
UNIX

System Administration with Solaris 11.3:


A Course for Beginners

Professor Paul A. Watters
Copyright 2016 British Scientific Press, An Imprint of the Aylesbury Trust
All rights reserved.
ISBN-13: 978-1523450084
ISBN-10: 1523450088

Solaris is a registered trademark of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.
Other names may be trademarks of their respective owners.





UNIX System Administration with Solaris 11.3:
A Course for Beginners

Professor Paul A. Watters

Chapter 0 Course Overview


Overview
Learning Outcomes
Course Author
Getting Started

Meet The Author - Paul A. Watters


Web-Based Readings
Course Specific Technology Requirements
Course Structure
Module 1: System Concepts
Module 2: Boot PROM and System Installation
Module 3: System Initialization and User Management
Module 4: Security and Process Control
Module 5: Files, Directories and File systems
Module 6: Booting and Disk Configuration
Module 7: Disks, Backup and Restore
Module 8: Basic Commands, Editors and Remote Access
Module 9: Clients and Servers
Module 10: Solaris Network Environment
Module 11: System Log Configuration
Module 12: Disk Management
Module 13: Processes, Threads and CPUs
Assignments Overview

Module 1: System Concepts


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 1.1: Key Features of Solaris
Features of UNIX

Computer Systems and Operating Systems

Theme 1.2: Exam Preparation


System and Network Administrators

Theme 1.3: Solaris Concepts


Kernel
Daemons
Shell
File system
Getting Help

Assignment 1.1: Obtain Solaris


Description
Procedure

Module 2 - Boot PROM and System Installation


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 2.1: OpenBoot PROM Monitor
Theme 2.2: SPARC Computer Systems
Theme 2.3: Solaris Installation
Theme 2.4: Package Management
Theme 2.5: Patch Management
Assignment 2.1: Solaris Key Benefits
Description
Procedure

Assignment 2.2: Package Installation


Description
Procedure

Module 3 - System Initialization and User Management


Overview

Learning Outcomes
Path to Complete the Module

Readings
Theme 3.1: Run Levels
Exercise 3A: Run Levels

Theme 3.2: Service Management Facility


Exercise 3B: SMF

Theme 3.3: Monitoring System Access


Exercise 3C: Monitoring System Access

Theme 3.4: User Management


Exercise 3D: User Management

Assignment 3.1: Changing run levels


Description
Procedure

Assignment 3.2: Adding users


Description
Procedure

Module 4 - Security and Process Control


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 4.1 - Regular Expressions
Exercise 4A: Regular Expressions

Theme 4.2 - File Security


Exercise 4B: File Security

Theme 4.3: Access Control Lists


Exercise 4C: Access Control Lists

Theme 4.4: Processes


Exercise 4D: Processes

Theme 4.5: Signals


Exercise 4E: Signals

Assignment 4.1: Monitoring Processes


Description
Procedure

Assignment 4.2: File Permissions


Description
Procedure

Assignment 4.3: Access Control Lists


Description
Procedure

Module 5 - Files, Directories and File Systems


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 5.1: File System Overview
Disk Devices
File Systems
Exercise 5A: File System Overview

Theme 5.2: File Types


Exercise 5B: File Types and Permissions

Theme 5.3: Mounting File Systems


Exercise 5C: Mounting File Systems

Theme 5.4: Volume Manager


Exercise 5D: Volume Manager

Theme 5.5: File Compression


Exercise 5E: File Compression

Assignment 5.1: File System Features


Description

Procedure

Assignment 5.2: Copying Files


Description
Procedure

Assignment 5.3: Symbolic Links


Description
Procedure

Module 6 - Booting and Disk Configuration


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 6.1: Booting
Exercise 6A: Booting

Theme 6.2: Making File Systems


Exercise 6B: Making File Systems

Theme 6.3: Monitoring File Systems


Exercise 6C: Monitoring File Systems

Theme 6.4: Repairing File Systems


Exercise 6D: Repairing File Systems

Assignment 6.1: Disk Volumes


Description
Procedure

Assignment 6.2: System Configuration


Description
Procedure

Assignment 6.3: Raw and Block Devices


Description
Procedure

Module 7 - Disks, Backup and Restore

Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 7.1: Format
Exercise 7A: Format

Theme 7.2: Partition


Exercise 7B: Partition

Theme 7.3: Backup


Exercise 7C: Backup

Theme 7.4: Restore


Exercise 7D: Restore

Assignment 7.1: Format


Description
Procedure

Assignment 7.2: Ufsdump


Description
Procedure

Assignment 7.3: Ufsrestore


Description
Procedure

Module 8 - Basic Commands, Editors and Remote Access


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 8.1: Basic Commands
Exercise 8A: Basic Commands

Theme 8.2: Editor


Exercise 8B: Editor

Theme 8.3: Remote Access


Exercise 8C: Remote Access

Assignment 8.1: Major Assignment


Description
Procedure

Module 9 - Clients and Servers


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 9.1: Servers
Exercise 9A: Servers

Theme 9.2: Clients


Module 10: Solaris Network Environment
Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 10.1: OSI Stack
Exercise 10A: OSI Stack

Theme 10.2: TCP/IP Stack


Exercise 10B: TCP/IP Stack

Theme 10.3: Network Interfaces


Exercise 10C: Network Interfaces

Assignment 10.1: Client/Server Benefits


Description
Procedure

Assignment 10.2: RPC Services


Description
Procedure

Module 11 - System Logging and Auditing


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 11.1: Syslog daemon
Exercise 11A: Syslog daemon

Theme 11.2: Syslog configuration


Exercise 11B: Syslog configuration

Theme 11.3: Using syslog


Exercise 11C: Using syslog

Theme 11.4: Syslog and the command-line


Exercise 11D: Syslog and the command-line

Assignment 11.1: Monitoring syslog


Description
Procedure

Assignment 11.2: Modifying syslog.conf


Description
Procedure

Module 12 - Disk Management and Pseudo File Systems


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 12.1: File systems
Exercise 12A: File systems

Theme 12.2: Volume management


Exercise 12B: Volume management

Theme 12.3: Using Volume Manager


Exercise 12C: Using Volume Manager

Theme 12.4: The proc file system


Exercise 12D: The /proc file system

Theme 12.5: Virtual memory


Exercise 12E: Virtual memory

Assignment 12.1: Using fsck


Description
Procedure

Assignment 12.2: Creating virtual memory


Description
Procedure

Module 13: Processes, Threads and CPU Scheduling


Overview
Learning Outcomes
Path to Complete the Module

Readings
Theme 13.1: Processes and Threads
Exercise 13A: Processes and Threads

Theme 13.2: Process Monitoring


Exercise 13B: Process Monitoring

Theme 13.3: Real-time Scheduling


Exercise 13C: Real-time Scheduling

Theme 13.4: CPU Monitoring


Exercise 13D: CPU Monitoring

Assignment 13.1: DNLC


Description
Procedure

Assignment 13.2: Inode Statistics


Description
Procedure

Chapter 0 Course Overview


Welcome to UNIX System Administration.

Overview
This section is a general welcome and a basic description of the course.

Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe key Solaris operating system concepts.
State the steps required to operate with the OpenBoot PROM monitor.
Install a Solaris system.
Initialize a Solaris system for user access.
Manage users and groups.
Implement security and process control strategies.
Administer files, directories and file systems.
Manage booting and disk configuration.
Administer disks.
Perform backup and restore operations.
Execute basic commands.
Use the vi editor.
Remotely access client systems.

Course Author
Meet the course author, Dr Paul A. Watters.

Getting Started
This course is intended to be a basic introduction to UNIX system administration. It is not
intended to be an encyclopedic reference; it is designed to introduce the UNIX system to
you, and equip you with basic skills to manage and run your own systems. You will learn
to use Solaris 11.3, the latest release of Oracle's UNIX operating system. The goal is to get you
used to working in a UNIX-like way rather than teaching you every possible command or
technique. Note that advanced topics like zones and ZFS are not covered in this course, but all
commands have been tested on Solaris 11.3.

Meet The Author - Paul A. Watters


Paul A. Watters received his PhD in computer science from Macquarie University,
Sydney, Australia. He has also earned degrees from the University of Cambridge,
University of Tasmania, and the University of Newcastle. Dr. Watters has written several
books on the Solaris operating environment, including Solaris 8: The Complete Reference,
Solaris Administration: A Beginner's Guide, Solaris 8 All-In-One Certification Guide, and
Solaris 8 Administrator's Guide.
After a stint dealing with security and privacy of electronic health records at the Medical
Research Council in the United Kingdom, Dr Watters moved to the University of Ballarat
in 2008, to become the first Research Director of the Internet Commerce Security
Laboratory (ICSL), a partnership between Westpac, IBM, the State Government of
Victoria, and the Australian Federal Police (AFP). The ICSL's goal was to build capability
in the cybercrime field, and to make Victoria the state of choice to undertake this type of
work. In addition to numerous research publications, and skilled graduates who now
protect Australia's cyber frontline, the ICSL also produced significant outcomes for its
research partners in the areas of threat mitigation (phishing, malware, identity theft,
scams, piracy, child exploitation) and intelligence gathering. Dr Watters undertook
consultancies for numerous external clients, including the Australian Federation Against
Copyright Theft (AFACT), the Attorney-General's Department (AGD) and Google. While
on sabbatical with the AFP, he developed an approach to detecting drug deals online.

In 2013, Dr Watters took up a Professorship in IT at Massey University in New Zealand.
He continued his work in online threats, especially focusing on advertising as a vector for
malware delivery and social harms. He also won two Callaghan Innovation grants to
develop new algorithms for data analytics. He partnered with NGOs such as End Child
Prostitution and Trafficking (ECPAT) to systematically examine the links between film
piracy and the proliferation of child abuse material online.

In 2015, Dr Watters also became an Adjunct Professor at Unitec Institute of Technology,
the home of New Zealand's first cyber security research centre. In recognition of his track
record combating child abuse material online, he received an ARC Discovery grant in
2015 with colleagues at the University of Tasmania, University of Canberra and
University College London.

Dr Watters now works as an independent cybercrime expert and is available for
consultancies. He welcomes enquiries from all potential clients. His security skills include
intelligence, threat monitoring and risk assessment, operational assurance, auditing,
penetration testing, forensics and malware analysis. He also has many years' experience
managing and developing systems, especially those with an analytics or data mining
focus.

Web-Based Readings
This course includes required online readings. You will access them from links within
each module where they are assigned. A list of the articles appears below.

Oracle, System Administration Guide: Basic Administration. Available
freely at https://docs.oracle.com/cd/E19253-01/817-1985/817-1985.pdf.[1]

What's New in Solaris 11.3. Available freely at
http://docs.oracle.com/cd/E53394_01/html/E54847/index.html.
Course Specific Technology Requirements
Students will need to obtain access to a Solaris system by using one of the following
methods:

1. Obtain Solaris for Intel from Oracle
(http://www.oracle.com/technetwork/serverstorage/solaris11/downloads/index.html). Obtain software (such as Boot Magic) that
will allow Solaris for Intel to be installed on a PC and dual-booted, if required; or
2. Obtain Solaris on a SPARC system by purchasing a second-hand SPARC system
from eBay (www.ebay.com), and Solaris for SPARC from Oracle.
New systems can also be purchased from Oracle for around $1,000; or
3. Create a free user account with a public access UNIX system like sdf.lonestar.org.

Course Structure
There are thirteen modules in this course, each of which includes specific readings and
assignments. Each module is briefly described below.
Module 1: System Concepts
This module introduces the concept of a Solaris system in the context of the enterprise.
Understanding the material in this module provides a basis for students to distinguish the
roles of system administrator and network administrator. In addition, students explore the
history and current hot topics in the Solaris operating environment and SunOS operating
system. Since industry certification is a key measure of the course's success in providing a
comprehensive introduction to enterprise systems, some exam tips and tricks will be
covered. Key concepts, including daemons, shells, file systems, and the kernel, will be
covered in detail, from a theoretical perspective, as well as with full details of the Solaris
implementation. We also discuss pragmatic aspects of using Solaris, including how to
obtain on-line help.
Module 2: Boot PROM and System Installation
Many PC users are familiar with the operating system BIOS which controls various
bootstrapping and initialization issues. While SPARC hardware also features a system for
bootstrapping, known as the OpenBoot PROM monitor, this facility is far more complex
than a PC BIOS. For a start, the PROM monitor has a complete implementation of the Forth
programming language, making it possible to customize complex system settings. In
addition, the PROM monitor can be used to test and secure hardware, and prepare a
system for installation and configuration. Once a system's PROM monitor has been
configured, the Solaris operating environment, including the SunOS 5.11 operating system
(i.e., Solaris 11.3), can be installed. However, preparing for and conducting an installation
requires some knowledge of the different hardware devices, types and systems supported
by Solaris, which are reviewed in this module. Finally, a complete walk through of the
installation process is presented.
Module 3: System Initialization and User Management
Once a system has been installed, various system initialization tasks need to be completed
before the system is ready to be deployed. Understanding what to configure at this stage is
as important as knowing how to configure it. Key concepts will be introduced, including
the notion of a system run level, and how to manage and modify the system
configuration and startup files. To use a system, users need to login using a username and
password. The basic processes behind authentication will be discussed in this module,
including practical issues like password selection. In addition, we investigate how to add,
delete and modify users and groups on the system by using command-line and GUI tools.
We also review how to examine the users who are logged into a system at any given time.
Module 4: Security and Process Control
Processes allow jobs to be performed on a Solaris system. They provide the envelope for

executing system calls, functions and other routines from within an application. Every
program running on a Solaris system, including user shells, must run as a process. Thus,
it's critical to understand how to work with and manage processes. Solaris provides tools
to display information about processes, and send signals to active processes instructing
them to terminate or restart. Process monitoring tools are an important operational aspect
of managing a Solaris system. Linked with the concept of processes is security: process
security, file security and user security. Every file on a Solaris file system has a
permissions string associated with it, allowing users, group members and all other users to
read, write and execute files, according to the permission string. From a security
perspective, it's important to understand how file permissions can easily allow intruders
access to a system if not set appropriately. In addition, default file permissions and high-level access control lists complement the standard UNIX file permission model on Solaris.
Module 5: Files, Directories and File systems
Solaris provides a number of different tools which operate
on files and file systems, including the volume manager, which allows floppy disks and
CD-ROM discs to be mounted and unmounted by unprivileged users. In addition, a
number of compression programs can be applied to individual files to increase the amount
of space available for other applications. In this module, we will review all of the standard
Solaris tools that perform file operations.
Module 6: Booting and Disk Configuration
The booting process of a Solaris system can be quite complex, since literally hundreds of services
can be started. This requires efficient use of CPU time and advanced memory
management. We discuss both of these issues with respect to the Solaris boot process, and
the various boot and shutdown commands that can be used to manage a Solaris system.
Services are started and stopped by using scripts in the /etc/init.d directory which we will
examine in detail. Before disks can be used to host file systems, as discussed in the
previous module, they need to be physically added to the system. This operation can either
be performed while the system has been powered down, or in real-time by using the
correct command sequence. This high availability option is one of the best features of
Solaris in a production environment, since downtime should never occur. We will
examine disk procedures closely, in addition to examining the different types of disk
device which map physical disk characteristics to logical system entities.
Module 7: Disks, Backup and Restore
Before disks can be used on a system, they must be formatted to ensure that no surface
errors exist that would prevent data being read and/or written correctly. The format

command is complex and contains a number of options, including surface analysis, which
are explained in this module. Once a disk file system has been created, it needs to be
backed up on a regular basis, by using a full or incremental dump. This ensures that, when
(not if) the disk eventually experiences a media failure, the contents of the disk can be
restored easily. In this module, the standard Solaris backup and restore procedures are
covered in depth.
Module 8: Basic Commands, Editors and Remote Access
Since much of the operation of a Solaris system involves command-line administration,
it's important to become competent with using the shell and the various utilities that can
be used with pipelines and other logical operators. Students will learn the bulk of these
commands and shell logic in this module, although some aspects will have been covered
in previous chapters. Basic commands to create, delete or update files will be given.
Special emphasis will be placed on editing new and existing text files by using the visual
editor (vi). Remote access to a Solaris system allows multiple users to login concurrently,
spawn separate shells, and execute different jobs. After mastering all of the topics covered
in this course, these skills can finally be applied to solving real world problems by
allowing other users to login to a system, and provide services. This module covers the
basic aspects of TCP/IP networking required to manage and support remote services, and
discusses some of the key security issues associated with providing remote access. We
also cover the configuration of local and remote printing services.
Module 9: Clients and Servers
Solaris provides a solid foundation for client-server computing. In this module, you will
learn about common ways to host services, and some of the most frequently used clients,
especially for network services. Server and client installation and maintenance are covered
in detail.
Module 10: Solaris Network Environment
Networking provides a means for Solaris clients and servers to communicate with each
other. You will first learn about the conceptual Open Systems Interconnection (OSI) model
for networks, and then learn about the Transmission Control Protocol / Internet Protocol
(TCP/IP) stack.
Module 11: System Log Configuration
Accounting for application and user process utilization lays the foundation for billing and
capacity planning. In this module, you will learn how to configure system logging, and
how to monitor the syslog.
Module 12: Disk Management
Managing storage is a complex issue, since there are many different file formats and uses,
including virtual memory. In this module, you will learn about volume management, file
system repairs, and the /proc file system.

Module 13: Processes, Threads and CPUs


All activity within the operating system is associated with a specific heavyweight process
or a lightweight thread. Processes and threads can be assigned to handle specific
applications, which can be helpful when managing resources on a large system. In this
module, you will learn how to manage processes and threads.

Assignments Overview
The course emphasizes pragmatic administration skills by encouraging students to become
familiar with the standard UNIX shells that allow user and administrator processes to be
spawned. Small assignments and quizzes form the basis for assessment. Students will also
be required to prepare a large paper on a topic related to UNIX systems administration,
emphasizing how UNIX assists in solving a specific industry problem. For example,
students might choose to write about the relationship between UNIX, Java and e-commerce, and how Solaris high availability is critical to ensuring application scalability.

Module 1: System Concepts


Overview
This module introduces the concept of a Solaris system in the context of the enterprise. In
a systems world dominated by high availability and e-commerce, network operating
systems, such as UNIX, have become synonymous with providing fault tolerant, scalable
services for the enterprise. In conjunction with the rise of e-business and distributed
systems, concerns have been raised about security and scalability for enterprises that need
to develop and deploy complex applications, potentially to a worldwide audience.
Understanding the material in this module provides a basis for students to distinguish the
roles of system administrator and network administrator. In addition, students explore the
history and current hot topics in the Solaris operating environment and SunOS operating
system. Since industry certification is a key measure of the course's success in providing a
comprehensive introduction to enterprise systems, some exam tips and tricks will be
covered.

Key concepts, including daemons, shells, file systems, and the kernel, will be covered in
detail, from a theoretical perspective, as well as with full details of the Solaris
implementation. Pragmatic aspects of using Solaris, including how to obtain on-line help,
will also be presented.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe the key features of the Solaris operating environment and SunOS operating
system.
Define the roles and responsibilities of a Solaris system administrator and network
administrator.
Describe the requirements of the three exams needed for certification.
Discuss strategies for exam success.
Describe daemons, shells, file systems, the kernel, and the operating system.
Utilize methods for obtaining help, including man pages.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Read Theme 1.1: Key Features of Solaris.
2. Read Theme 1.2: Exam Preparation.
3. Read Theme 1.3: Solaris Concepts.
4. Complete Assignment 1.1: Obtain Solaris.

Readings
You may wish to complete the readings for this module in the order suggested.

https://docs.oracle.com/cd/E53394_01/html/E54847/

Theme 1.1: Key Features of Solaris


Solaris is an enterprise operating system that features a multi-user, multi-process, and
multi-threaded architecture. It is designed primarily for business users who must host
complex applications like database systems, application servers, and Customer
Relationship Management (CRM) systems. Solaris is developed and marketed by Oracle,
which acquired Sun Microsystems, a pioneer of UNIX workstation and server development
since 1982, in 2010. Under Sun's slogan "the network is the computer", the company
focused on integrating networking with enterprise systems, developing industry-standard
protocols like the Network File System (NFS), which allows disk volumes to be
exported from a server and logically mounted on a number of different clients. In addition,
Sun developed the Java programming language, which has come to dominate the
enterprise application server market, with many organizations making use of component
technologies, like Enterprise JavaBeans (EJBs), to abstract and centralize data
processing operations.

Oracle's business is selling mid- to high-end server systems, based on the SPARC
architecture, which is also supported by other high-end system developers, such as Fujitsu.
In recent years, Oracle has acknowledged the widespread use of Intel CPUs in industry,
and has developed an Intel-compatible version of Solaris. While many users who want to
use a UNIX-style operating system have opted for Linux on the Intel platform, Solaris for
Intel has developed a following over the years. Solaris for Intel has one advantage when
learning the UNIX operating system: it can be obtained for a low cost from Oracle, and it
can be installed and booted from your home PC. This provides an ideal home learning
environment for administrators who need to learn the appropriate skills to run "big iron" at
work, in a sand-boxed environment.

To give you an idea of the kind of systems that Oracle develops, take the Sun Fire E15K
system: it supports over 100 CPUs in a single system, and 512 GB of RAM. Every system
component, including power supplies, disk drives, disk controllers and buses, is
redundant; this means that if one component fails, its role is automatically assumed by its
partner. Most of these components are also hot-swappable; that is, when a failure occurs,
the component can be replaced while the system is still running. For example, when a disk
drive that is mirrored through RAID (Redundant Array of Inexpensive Disks)
technology fails, data is still written to its mirrors while the failed drive is removed, replaced and brought
back on-line. No reboots are necessary, and users are unaware that a failure has occurred.
This type of configuration is known as a highly available configuration, since the system
as a whole never fails, even though individual components do. This type of performance is
critical for businesses that must operate reliably, 24x7. It is a long way from the "blue
screen of death" that some administrators are used to: it is not unusual for Solaris
systems to have uptimes measured in years, not hours. As you learn more about Solaris,
and use it in your daily work life, you may become as enthusiastic about its performance
as many of its advocates are.


Features of UNIX
Solaris systems, being an implementation of a UNIX system, generally share the
following features:

A kernel, which is the core of the operating system, written in the C language.
Applications interact with the kernel by using system calls.
Hardware devices are represented logically by device files.
File systems are hierarchical, providing a directory structure, and offer fault-recovery solutions like journaling.
Multi-user processing in a client/server environment allows multiple users to boot
from the same server. Thousands of users may perform operations on a single system
concurrently.
Multi-process architecture allows multiple applications and services to execute
concurrently.
Multi-thread architecture allows processes to create Light Weight Processes
(LWPs) that have much less overhead than individual processes, reducing resource
usage by discrete tasks.
A set of standard text and flat-file database processing tools that allow configuration
files to be modified in a consistent way. Interestingly, after years of developing
proprietary binary format configuration files, many vendors now use text-based
XML (eXtensible Markup Language) for system configuration.
A consistent Character User Interface (CUI), provided by a user shell.
A consistent Graphical User Interface (GUI), provided by X11 and the Common
Desktop Environment (CDE).
Application architectures based on small, discrete programs or components that can
be logically sequenced to perform complex operations by using pipes, redirection
operators and other shell built-ins.
Application developer support is a priority, by providing easy-to-use APIs and
standard system libraries that are consistent with standard C libraries.

Computer Systems and Operating Systems


If you're an experienced administrator or developer, most of these terms will be familiar.


However, those who are new to operating systems and their relationship to hardware
should review the following elements of computer systems:

Central Processing Unit (CPU)
Memory
Input/Output devices
Buses

In terms of memory operation, the following elements should be reviewed:

User-visible registers
Data registers
Address registers
Index register
Segment pointer
Stack pointer
Control registers
Status registers
Cache principles and design

With respect to the CPU, the following elements of instruction execution should be
reviewed:

Program execution
Fetching instructions
Instruction execution
Program interrupts
Timer interrupts
I/O interrupts
Hardware failure interrupts

All computer systems must have an operating system like Solaris. The following features
of operating systems should be reviewed:

User/computer interface
Resource management

Evolution and upgrade path


Serial/batch processing
Multitasking
Time-sharing
Processes
Memory management
Information security
Scheduling

Theme 1.2: Exam Preparation


The purpose of this theme is to introduce students to the certification of basic
administration tasks required to manage an enterprise system. The platform chosen for
presenting this material is Oracle's Solaris 11.3 operating environment, which is based on
the SunOS 5.11 operating system. However, the skills learned could equally be applied to
previous versions. Because Solaris is one of the most widely deployed commercial versions
of UNIX, administrators find that their skills can be usefully applied to both BSD and
System V UNIX variants running on other platforms, such as Linux or IBM's AIX.

After completing this course, students should have a solid foundation for sitting some of
the Solaris certification exams (http://education.oracle.com/pls/web_prod-plqdad/db_pages.getlppage?page_id=212&path=OS11).

In this theme, the structure and assessment of these exams will be examined, and
strategies for success discussed.

System and Network Administrators


For the purposes of certification, Oracle distinguish between two different types of
administrator: a system administrator and a network administrator. A system administrator
generally has a scope of responsibility for managing single or multiple systems as
systems: that is, they are not responsible for managing the network architecture, which is
the obligation of the network administrator. A system administrator is responsible for
performing the following tasks:

Setting hardware parameters using the OpenBoot PROM monitor
Starting up and shutting down systems
Performing user and group administration tasks
Ensuring that individual systems are secure
Performing process control and managing user jobs
Installing, initializing and configuring file systems
Creating files and directories
Managing the boot process, including startup scripts (/etc/init.d)
Formatting disks
Performing backup and recovery operations
Executing shell commands and writing scripts
Using the editor
Setting up remote connections
Managing clients and servers
Configuring hosts for networking
Managing pseudo file systems
Configuring virtual memory
Sharing volumes using the Network File System (NFS)
Sharing home directories using the automounter and NFS
Tuning file system performance by using caches
Installing naming services on the local system (DNS, NIS and NIS+)
Configuring role-based access control for privileged commands
Managing the system using the Solaris Management Console (SMC)
Performing multiple system installations by using JumpStart

In addition to these responsibilities, the network administrator performs the following
tasks:

Designing networks
Creating subnets
Configuring multiple network interfaces for multi-homed hosts
Configuring multiple network interfaces for firewalls and routers
Setting up DNS domains
Setting up NIS/NIS+ domains

Managing diskless clients using ARP and RARP


Managing internet services
Configuring routing daemons
Developing policies on permitted ports and transports
Managing the network
Configuring centralized network timing
Troubleshooting network problems
Transitioning from IPv4 to IPv6

Becoming a system administrator is the first step to becoming a network administrator.
Approaching the exams from the point-of-view of becoming a knowledgeable and
experienced administrator, as well as a person who can pass the exam, will serve your
interests better in the long run.

Theme 1.3: Solaris Concepts


Before installing and using Solaris, it's useful to understand some key concepts. Being
able to piece together the components that comprise the core functionality of Solaris will
make it easier to work effectively within the operating environment.

Kernel
The kernel is the core of the SunOS operating system: it implements all of the
functionality that is necessary to support input/output and the process model. Many of the
higher level functions supported by the system, including Internet services, are executed
by daemons that are external to the kernel. Users interface with the kernel by spawning a
shell when they log in to the system. When data needs to be persisted, it is usually written
to a file system. At the heart of these more complex operations and services is the kernel.

The SunOS kernel has its roots in the Berkeley UNIX distribution (BSD), although it has
more recently become compliant with System V. The original UNIX kernel was written in
the C programming language - prior to UNIX, kernels were invariably written in
assembly language, which had to be changed each time a new system architecture was
developed. Using C allowed a level of abstraction between hardware and system software
that rapidly increased the speed at which new systems could be programmed and
developed. Concurrent with these developments was the introduction of the Integrated
Circuit (IC), allowing memory chips and Central Processing Units (CPUs) to be mass
produced, at ever increasing operational frequency rates.

C programs can access kernel services directly by using system calls, or indirectly by
using system library APIs. Solaris man pages provide descriptions for the standard set of
system calls and library routines. It is not possible for user applications to communicate
directly with hardware devices: all commands can ultimately be traced to system calls and the
underlying basic functions they invoke on specific hardware platforms.

The UNIX kernel is divided into four key components: the hardware control component;
the process management component; the file system component; and the system call
component. The hardware control component interfaces with hardware devices, and
implements the low-level operations required to read and write data to these devices. The
file system and process management components sit directly on top of the hardware
control component. The file system component implements all operations required to
support data persistence operations on disks, including raw (/dev/rdsk/) and block
(/dev/dsk/) devices. The process management component supports System V Inter-Process
Communication (IPC), process scheduling and memory management. The system call
component sits directly on top of the file system and process management components,
and provides the interface between the kernel and user applications (like the shell) or
system daemons (like the Internet Super Server, inetd), most often through system library
calls.

Daemons
Daemons are system services that operate as independent helpers, since they are not
built into the kernel. They provide high-level interfaces for local and remote users to
access different types of applications running on a system. All networked daemons must
have a port number defined in the services database (/etc/services). In addition, many
daemons are executed through the Internet Super Daemon (inetd), in which case, they are
defined in /etc/inetd.conf.

The following daemons are commonly found on Solaris systems:
The FTP daemon (in.ftpd), which implements a server for the File Transfer Protocol
The Telnet daemon (in.telnetd), which is a standard Telnet server for supporting
interactive logins
The remote shell daemon (in.rshd), which allows a remote user to spawn a shell on
the local system
The remote login daemon (in.rlogind), which allows a remote user to login to the
local system
The remote execution daemon (in.rexecd), which permits remote users to execute
commands on the local system
The talk daemon (in.talkd), which is a real-time chat service
The comsat daemon (in.comsat), which notifies logged-in users when new mail arrives
The UNIX-to-UNIX Copy Program (UUCP) daemon, which allows compatible
UNIX systems to copy files to each other
The Trivial FTP daemon (in.tftpd), which supports the booting of diskless
clients from the local server

This list is not meant to be exhaustive; for more details, see the /etc/inetd.conf file.
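As a quick illustration, you can use standard shell tools to check which port is assigned to a
particular service in the services database (the service name here is just an example; the
exact output depends on your release):

$ grep ftp /etc/services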

Shell
The shell is the basic CUI for users to interact with the kernel. Although the Bourne shell
(/bin/sh) was the original shell developed by Steve Bourne, there are many new and
improved shells available in Solaris, including the C shell (/bin/csh), the Korn shell
(/bin/ksh) and the Bourne again shell (/bin/bash). Each of these shells offers different
programming and job management facilities. This highlights the key function of shells:
although they are used to execute commands and run programs, they are also highly
programmable. This means that expert shell users can increase their productivity by
writing small shell scripts to perform repetitive tasks. Using the shell as a programmable
interface is often so time-saving that many advanced users prefer it over a GUI system like
Gnome.
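
As a small, illustrative example of this programmability, the following Bourne shell script
(a sketch only; the script name is hypothetical) prints a greeting for every user who is
currently logged in:

$ cat greet.sh
#!/bin/sh
# Print a greeting for each distinct logged-in user
for user in `who | awk '{print $1}' | sort -u`
do
    echo "Hello, $user"
done
$ sh greet.sh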

The shell can be used to execute simple commands like the finger command, which
displays a list of currently logged-in users:

$ finger
Login Name TTY Idle When Where
jbloggs Joe Bloggs *pts/0 21: Wed 07:45 joe.bloggs.com
sbloggs Sue Bloggs *pts/1 11: Wed 19:32 modem1.bloggs.com
dbloggs Dana Bloggs *pts/2 09: Wed 02:56 modem2.bloggs.com


The $ here is the shell prompt: you enter the command name at the prompt and press
Enter to execute it. When executed, the finger command displays the username,
full name, terminal number, idle time, login time and client hostname for each user. Thus,
Dana Bloggs logged in on terminal 2 after 2 a.m. on Wednesday from the host modem2,
and has been idle for 9 hours.

The shell contains many operators, such as the pipe operator (|), which allow filtering to be
performed. For example, to print only the login details for dana, we could pipe the output
from the finger command to the grep command, which is a pattern matching program,
giving the following result:

$ finger | grep -i dana
dbloggs Dana Bloggs *pts/2 09: Wed 02:56 modem2.bloggs.com


There are no limits to the number of commands that can be chained together in this way.
For example, to redirect and append the output of the finger and grep combination to a file
called /tmp/dana_logins.txt, the following command could be used:


$ finger | grep -i dana >> /tmp/dana_logins.txt

File system
Solaris supports many different types of file systems, including the following:

UNIX file system (ufs)
System V UNIX file system (s5fs)
MS-DOS file system (pcfs)
High Sierra file system (hsfs)
Zettabyte file system (zfs)

Solaris 11 uses ZFS as its default root file system, but this course focuses on the traditional UNIX file system (ufs). The ufs is a hierarchical file system,
allowing directory entries to be created as special files underneath the top-level root
directory, denoted by /. Typically, the following directory entries will appear in the root
directory:

/dev - device files
/devices - device tree
/etc - system configuration files
/home - automounted home directories for users
/opt - optionally installed applications
/platform - kernel files
/tmp - temporary file space
/usr - installed applications
/var - accounting and logging
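
To see which of these entries exist on a particular system, simply list the root directory
(the exact contents vary between installations):

$ ls /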

A ufs file system consists of a boot block, super block and inode blocks. The boot block
(block 0) is used by the system for booting, if the disk drive is bootable. The super block
stores all of the status information for the file system, including the total number of
blocks, the number of blocks set aside for inodes, the file system name, and list of unused
inodes. The inode blocks contain all of the information about files and directories that are
stored on the file system, including user and group ownership, file size, pointers to blocks
and the date on which the file was last accessed or updated.
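
The inode number allocated to a file is normally hidden, but it can be displayed with the -i
option to ls, and overall block usage for a file system can be checked with df. For example
(output omitted; figures vary between systems):

$ ls -i /etc/passwd
$ df -k /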

Getting Help
There are some excellent, and in many cases free, resources that are available to explain
Solaris concepts, terms and commands which you may not understand. The following
sequence will assist you in finding the information you require:

In a Solaris shell, type man command to see the manual page for the command.
In a web browser, connect to http://docs.oracle.com/ and search the
Oracle system administration and reference manuals.
In a web browser, connect through to the URL http://www.google.com/ and search
the archives of the USENET forum comp.unix.solaris. This is particularly useful for
troubleshooting problems which are not contained in the manual or in man pages.
Look at the Sun Managers list archive at http://www.sunmanagers.org
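
For example, to read the manual page for the ls command, or a page from a specific
section such as 1M (maintenance commands), you could type the following (assuming the
default Solaris man configuration):

$ man ls
$ man -s 1M shutdown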

Assignment 1.1: Obtain Solaris


Description
The aim of this assignment is to ensure students have access to a Solaris system to complete
assignments in the course that require shell programming, or examining system
configuration files. The various options are covered in detail in the Course Specific
Technology Requirements section of the Course Overview.

Procedure
1. Obtain Solaris for Intel from Oracle (www.oracle.com). Obtain
software (such as Boot Magic) that will allow Solaris for Intel to be installed on a
PC and dual-booted, if required; or
2. Obtain Solaris on a SPARC system by purchasing an old SPARC system from eBay
(www.ebay.com), and Solaris for SPARC from Oracle
(www.oracle.com). Suitable systems include a SPARCstation 10 or 20, Ultra 5 or 10,
or Sun Blade 100. The latter can be purchased new from Oracle for around
$1,000; or
3. Create a free user account with a public access UNIX system like sdf.lonestar.org.

Module 2 - Boot PROM and System Installation


Overview
Many PC users are familiar with the operating system BIOS which controls various
bootstrapping and initialization issues. While SPARC hardware also features a system for
bootstrapping, known as the OpenBoot PROM monitor, this facility is far more complex
than a PC BIOS. For a start, the PROM monitor has a complete implementation of the Forth
programming language, making it possible to customize complex system settings. In
addition, the PROM monitor can be used to test and secure hardware, and prepare a
system for installation and configuration.

Once a system's PROM monitor has been configured, the Solaris operating environment,
including the SunOS 5.11 operating system, can be installed. However, preparing for and
conducting an installation requires some knowledge of the different hardware devices,
types and systems supported by Solaris, which are reviewed in this module. Finally, a
complete walk through of the installation process is presented.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Perform actions at the OpenBoot prompt.
Recall all OpenBoot commands.
Configure devices using OpenBoot.
Secure hardware using OpenBoot.
Discuss preconfiguration strategies.
Investigate different hardware devices, types and systems
Install Solaris systems
Add new packages to the system using the pkgadd, pkginfo, pkgchk, and pkgrm
commands.
Install and manage patches by using the patchadd, patchrm, or showrev commands

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 2.1: OpenBoot PROM Monitor.
3. Read Theme 2.2: SPARC Computer Systems.
4. Read Theme 2.3: Solaris Installation.
5. Read Theme 2.4: Package Management.
6. Read Theme 2.5: Patch Management.
7. Complete Assignment 2.1: Solaris Key Benefits.
8. Complete Assignment 2.2: Package Installation.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Solaris 11 Installation Guide, freely available from docs.oracle.com

Theme 2.1: OpenBoot PROM Monitor


The OpenBoot PROM monitor is an interface that allows administrators to interact with
the computer system hardware independently of the operating system. While it has some
similarities to a PC BIOS, in that various hardware configuration options can be set, its
capabilities extend far beyond simple configuration. These facilities include the ability to
perform integrity and troubleshooting checks on hardware devices, including the SCSI
bus. OpenBoot also provides a number of different methods to allow booting of the
system from disk, floppy, CD-ROM, tape or the network. All commands are executed
from the ok prompt. To reach the ok prompt from a system which is hung, or which is
initializing memory, simply press Stop-A on the keyboard.

The following commands are commonly used for booting in the OpenBoot monitor:

boot - boots the system using the default boot device
boot disk - boots the system using the primary hard drive
boot cdrom - boots the system using the primary CD-ROM drive
boot tape - boots the system using the primary tape drive
boot net - boots the system using a kernel which is downloaded from a server
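
OpenBoot behaviour is controlled by NVRAM configuration variables, which can be
inspected and changed from the same ok prompt. A brief illustrative sequence for checking
and then changing the default boot device might look like this (the variable name is
standard OpenBoot, but the available device aliases differ between machines):

ok printenv boot-device
ok setenv boot-device disk
ok reset-all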

The PROM monitor provides a number of different diagnostic tools to assist in


troubleshooting hardware problems. These problems can prevent a system from booting.
For example, if two devices attached to the SCSI bus have identical IDs, then the system
may not be able to boot. To verify that all devices attached to the bus have unique IDs, the
probe-scsi command may be used.

The following commands are commonly used for troubleshooting in the OpenBoot
monitor:

probe-scsi - displays a list of SCSI devices attached to the system
test <device> - tests that a device (such as net) is operating correctly
watch-clock - tests the internal clock
watch-net - tests the network connection
test-all - runs all tests available on the system
obdiag - runs all tests from a menu interface
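
For instance, before troubleshooting a system that fails to boot from disk, you might run
the following from the ok prompt (illustrative only; the output depends entirely on the
attached hardware):

ok probe-scsi
ok test net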

Theme 2.2: SPARC Computer Systems


A key benefit for enterprise computing is the synergy created between the SunOS
operating system and the Scalable Processor ARChitecture (SPARC) CPU. While Oracle
is responsible for the development of Solaris, the development of SPARC is
managed by an independent consortium. This allows other enterprise level hardware
vendors, such as Fujitsu (http://www.fujitsu.com/) and T.Sqware
(http://www.tsqware.com/), to build their SPARC systems that run Solaris. Indeed, some
of the best TPC benchmarks for database transaction throughput are obtained from Fujitsu
systems running Solaris.

All modern SPARC systems are based on a 64-bit architecture, leading to improvements in
application scope, arithmetic precision and execution speed. SPARC systems support
different system and peripheral buses, including SCSI and PCI. While the raw speed of
SPARC CPUs may not seem fast compared to 2GHz Pentium CPUs, the latter are only 32
bit, and are based on system buses that run at much slower speeds, and have limited
bandwidth, compared to SPARC architecture systems. SPARC systems excel at executing
large numbers of applications, and prioritizing allocation of system resources to specific
user and system processes.

The following table shows some common SPARC systems and their kernel architecture:

Kernel Architecture    System Name
sun4c                  SPARCstation 1
sun4c                  SPARCstation IPX
sun4m                  SPARCstation 10
sun4m                  SPARCstation 20
sun4d                  SPARCserver 1000
sun4d                  SPARCcenter 2000
sun4u                  Ultra 10
sun4u                  Enterprise 420R

A sun4u or newer architecture is required to install and operate Solaris 11.3 successfully. Indeed,
a binary application compiled on a specific kernel architecture can be executed on any
other system with the same architecture. This means that the binary executable does not
need to be recreated when it is exchanged between different systems, which is useful in a
NFS environment, where file systems are shared between hosts.

The following SPARC systems are supported by Solaris:

SPARCclassic
SPARCstation LX
SPARCstation 4
SPARCstation 5
SPARCstation 10
SPARCstation 20
Ultra 1 (including Creator and Creator 3D models)
Enterprise 1
Ultra 2 (including Creator and Creator 3D models)
Ultra 5
Ultra 10
Ultra 30
Ultra 60
Ultra 450
Enterprise 2
Enterprise 150
Enterprise 250
Enterprise 450
Enterprise 3000
Enterprise 3500
Enterprise 4000
Enterprise 4500
Enterprise 5000
Enterprise 5500
Enterprise 6000
Enterprise 10000
SPARCserver 1000
SPARCcenter 2000

Theme 2.3: Solaris Installation


Solaris is typically installed using a DVD-ROM attached to the system, and requires
around 5G of disk space for the full installation using Live Media, or 2.9G for the text
installer. Prior to installation, the configuration information for your system should be
obtained from the local network administrator:

Hostname, e.g., darby
IP address, e.g., 192.204.34.58 (or whether IP addresses are leased using the
Dynamic Host Configuration Protocol)
DNS domain name, e.g., cassowary.net
NIS/NIS+ domain name, e.g., Cassowary.Net.
Subnet mask, e.g., 255.255.255.0 for a Class C network
The locale for the system if internationalization is required

If you wish to install a SPARC-based system, then insert the Installation DVD into the
DVD drive, and type the following at the OpenBoot prompt (assuming a run level of 0):

ok boot cdrom

Youll then see output like the following:

Boot device /pci@1f,0/pci@1,1/ide@2/cdrom@2,0:f File and args:
SunOS Release 5.11 Version Generic 64-bit
Copyright 1983-2016 Oracle and/or its affiliates. All rights reserved.

After analyzing the disk, and creating a swap space partition for virtual memory, the
installer copies a limited version of the operating system to disk (the mini-root), and
reboots. This boot reconfigures the /dev and /device directories, detecting any peripheral
devices that are attached to the system. Once the system is up, you'll be led
through a series of configuration choices, which set the following installation parameters:

Network
Name services
Date and Time
Root password
Power management
Proxy server


Once the system has rebooted again, the Gnome login screen will appear, and you may
login to the system using the root username and the root password selected during the
installation procedure.

To install Solaris on an Intel platform, download the appropriate .iso disk image, and burn
it to a blank DVD. This can be used to boot the system, if installation onto a primary boot
disk is to be performed. However, some users may wish to create a virtual installation
using Oracle VirtualBox. In this case, install VirtualBox first, and then configure a virtual
machine for Solaris, and boot using the DVD.

Theme 2.4: Package Management


Packages are archives that allow different types of files, such as source code, executables
and configuration files, to be bundled together. While packages are similar to zip archives
in this respect, they differ in several important ways. Firstly, package files are not
compressed: they are stored in a portable format which can be read by both SPARC and
Intel Solaris systems. Secondly, files in a package are copied to the correct system
directories when the package is installed, and all of the associated authorization data (such
as file permissions and file ownership) is transferred.
You use a single command, pkg, to install, uninstall and update packages. Some
commonly used subcommands include:
pkg publisher - show which repository packages will be installed from
pkgrepo info -s <repository> - show how many packages are available from the repository
pkg install - install a package
pkg uninstall - uninstall a package
pkg list - list installed packages (use -u to show only those with updates available)
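
A short illustrative IPS session might look like the following (the package name is only an
example; substitute any package available from your configured publisher):

$ pkg publisher
$ pkg search vim
# pkg install editor/vim
$ pkg list -u
# pkg uninstall editor/vim

Querying the publisher and searching can be done as an ordinary user, while installing and
uninstalling packages require root privileges.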

In previous versions of Solaris, packages could be installed and managed by using
command-line tools. The following command-line tools were commonly used to manage
packages:

pkgadd - installs a package onto a system
pkgchk - determines the integrity of a package file or an installed file that is part of a
package
pkginfo - displays information about a package, including a file list
pkgmk - initializes a package directory
pkgproto - initializes a prototype file that defines the contents of a package
pkgrm - uninstalls a package from a system
pkgtrans - transfers a package directory into an archive file
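
For an older SVR4 package supplied as a datastream file, a typical session might look like
this (the package name and path are hypothetical):

# pkgadd -d /tmp/SMCexample.pkg
# pkginfo | grep SMCexample
# pkgchk SMCexample
# pkgrm SMCexample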

Theme 2.5: Patch Management


Like all software, Solaris applications are sometimes shipped with undetected bugs, or
with exploits and weaknesses that are only discovered in production. Some of these are the
result of poor programming practices, while others are the normal result of abnormal
product usage. For example, many daemons have suffered from the buffer overflow
problem, where a rogue user may gain root access to a system by causing a daemon to
crash. This can be caused by a buffer having a fixed size, and memory being overwritten
when data is written beyond the buffer's array boundary. Alternatively, a webserver might
crash if a denial of service attack is launched against it. In both of these cases, Oracle or
a third party may release a patch which is designed to rectify the problem. The benefit of
using patches over installing new software versions is time: a patch is often much faster
to install than a new application or operating system!

Patches allow executable code modifications to take place. Oracle patches can be
downloaded from Oracle Technology Network
(http://www.oracle.com/technetwork/systems/patches/overview/index.html). There are
two types of patches that are generally made available: individual patches that solve
specific problems, and jumbo patches, which contain a set of recommended patches for
each operating system release. After installing a new Solaris system, the first task of an
administrator is often to install the latest jumbo patch.
Note that in Solaris 11, patch management has been incorporated into the Image
Packaging System (IPS). Therefore, you need to use IPS for patching and package
management - you simply need to use the pkg command to get a list of all packages that
need updating, and they will be updated:
# pkg update

To display the currently installed patches on a system running Solaris 10 or earlier, the
following command can be used:

# showrev -p


To install a patch called /tmp/106453-45 onto the system, the following command can be
used:

# patchadd /tmp/106453-45


Sometimes, an installed patch interferes with the operation of a system in an unexpected
way, and must be removed. The following command removes the patch 106453-45 from
the system:


# patchrm 106453-45

Assignment 2.1: Solaris Key Benefits


Description
After reading all of the reading assignments for the first module, write a 1,000 word paper
summarizing the key benefits of Solaris as expressed by those documents. Try to cut
through the marketing hype to focus on key technology advances, such as multi-user,
multi-process and multi-threading.

Procedure
1. Identify the key technology benefits of using Solaris.
2. Focus on advantages such as multi-user, multi-process and multi-threading
technology.

Assignment 2.2: Package Installation


Description
Download a new package from www.sunfreeware.com in an area that interests you, such
as the GNU text utilities. Add the package to a Solaris system by using the pkgadd
command. Remove the package, and then try to add it again using the pkg command.
Which approach do you prefer, IPS or traditional packages?

Procedure
1. Download a new package from www.sunfreeware.com.
2. Add the package to a Solaris system by using the pkgadd command.
3. Remove the package, and add it again using IPS.

Module 3 - System Initialization and User Management


Overview
Once a system has been installed, various system initialization tasks need to be completed
before the system is ready to be deployed. Understanding what to configure at this stage is
as important as knowing how to configure it. A key role of a server is to provide network
and other services for users, and these are traditionally loaded at boot time. However,
newer versions of Solaris use the Service Management Facility (SMF) to manage services,
rather than manually editing service startup files located in /etc/init.d.

To use a system, users need to log in using a username and password. The basic processes
behind authentication will be discussed in this module, including practical issues like
password selection. In addition, we investigate how to add, delete and modify users and
groups on the system by using command-line tools. We also review how to examine the
users who are logged into a system at any given time.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe the process of system booting and service configuration
Describe login procedures, such as logging into and out of a system.
State how to change a password.
Describe how to show which users are currently logged into a system.
Define how to create and modify user accounts and groups using the command-line
tools (useradd, groupadd, usermod, groupmod, userdel, or groupdel commands).
Set up initialization files for user shells.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: Run Levels.
3. Read Theme 2: Service Management Facility.
4. Read Theme 3: Monitoring System Access.
5. Read Theme 4: User Management.
6. Complete Assignment 3.1: Changing run levels.
7. Complete Assignment 3.2: Adding users.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Read the INSTALLING AND UPDATING SOLARIS 11 documents from http://docs.oracle.com/cd/E23824_01/.

Theme 3.1: Run Levels


A run level defines the state of the system and its ability to run certain types of
applications and provide specific types of services. When starting up or shutting down a
Solaris system, a number of run levels are entered and/or left respectively. When the
system enters a run level, the Service Management Facility (SMF) can startup or
shutdown services as required.

Run levels allow the system to be administered or used in discrete ways. For example,
during the single user run level, only the administrator may login and modify files,
execute processes etc. The system is not available for other users at this time. This allows
hardware configuration and other changes to be made without interfering with user
processes. You can check the current run level by using the who -r command:

# who -r
.       run-level 3  Feb 10 20:38     3      0  S

The Solaris run levels are:

Run level 0 - OpenBoot PROM monitor run level.
Run level 1 - Single-user run level.
Run level 2 - First multi-user run level (no NFS).
Run level 3 - Second multi-user run level (NFS).
Run level 4 - Undefined run level.
Run level 5 - Shutdown and power-off.
Run level 6 - Shutdown and reboot.

Run levels are also called init states, because the init command can be used to change run
levels. Run levels can also be changed by using a number of specialized commands, like
shutdown, which serve the same purpose but behave slightly differently from using init
directly. For example, the shutdown command performs the following tasks:

Allows a grace period before the shutdown occurs


Displays a warning about the impending shutdown to all logged in users
Requires confirmation before shutting down
Performs an init 5

The following example shows a shutdown using a 2 minute grace period, as viewed from
the console:

# shutdown -i0 -g120 -y

Shutdown started. Mon Apr 29 23:22:11 EST 2016


Broadcast Message from root (console) on sydney Mon Apr 29 23:22:11 EST 2016
The system will be shut down in 2 minutes
.
.
INIT: New run level: 0
The system is coming down. Please wait.
.
.
The system is down.
syncing file systems [1] [2] done
Program terminated
Type help for more information
ok

Using the init command to shut down is easy. The following command synchronizes disk
data and then shuts down the system and powers it off:

# sync; init 5


In contrast, the following command synchronizes disk data and then reboots the system:

# sync; init 6


To boot the system in single-user mode from the OpenBoot PROM monitor, the following
command can be used:

ok boot -s


When a system boots, it starts at run level 0 and works its way through to run level 3,
which is the normal multi-user state including NFS file sharing.
Run levels and their actions are ultimately defined by the /etc/inittab file that contains
entries relating to activities conducted when each run level is reached. Entries are
comprised of an identifier, an init state, an action, and the command to be executed,
separated by colons. The following is a sample /etc/inittab file:

ap::sysinit:/sbin/autopush -f /etc/iu.ap

ap::sysinit:/sbin/soconfig -f /etc/sock2path
fs::sysinit:/sbin/rcS sysinit >/dev/msglog 2<>/dev/msglog </dev/console
is:3:initdefault:
p3:s1234:powerfail:/usr/sbin/shutdown -y -i5 -g0 >/dev/msglog 2<>/dev/msglog
sS:s:wait:/sbin/rcS >/dev/msglog 2<>/dev/msglog </dev/console
s0:0:wait:/sbin/rc0 >/dev/msglog 2<>/dev/msglog </dev/console
s1:1:respawn:/sbin/rc1 >/dev/msglog 2<>/dev/msglog </dev/console
s2:23:wait:/sbin/rc2 >/dev/msglog 2<>/dev/msglog </dev/console
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5 >/dev/msglog 2<>/dev/msglog </dev/console
s6:6:wait:/sbin/rc6 >/dev/msglog 2<>/dev/msglog </dev/console
fw:0:wait:/sbin/uadmin 2 0 >/dev/msglog 2<>/dev/msglog </dev/console
of:5:wait:/sbin/uadmin 2 6 >/dev/msglog 2<>/dev/msglog </dev/console
rb:6:wait:/sbin/uadmin 2 1 >/dev/msglog 2<>/dev/msglog </dev/console
sc:234:respawn:/usr/lib/saf/sac -t 300
co:234:respawn:/usr/lib/saf/ttymon -g -h -p "`uname -n` console login: " -T Oracle -d /dev/console -l console -m ldterm,ttcompat
The SMC is also started by an entry in /etc/inittab.

Exercise 3A: Run Levels


Create a table listing all of the run levels, and describe the purpose of one startup
script in each run level.
Use the eeprom command to display all of the default PROM variables for your
system. Check that the devices listed match those installed on your system.

Theme 3.2: Service Management Facility


In simple terms, the Service Management Facility (SMF) provides an easy way for
services to be started and stopped using a common system. These services include
databases, web servers, application servers, and so on. It's important that various
commands and applications be configured in certain ways upon startup, and the SMF
makes this easy. Every service on the system is described by a Fault Management
Resource Identifier (FMRI). The sendmail service, which delivers email, has the FMRI:

svc:/network/smtp:sendmail


To examine services, the svcs command is used. Some common tasks include:

svcs -a - show all currently installed services
svcs -p - show how processes and services are related
svcs -d - show the services that a given service depends on
svcs -l - provide a long listing of data about an FMRI

Individual services can be managed via their FMRI by using the svcadm command:

svcadm enable - enable a service
svcadm disable - disable a service
svcadm refresh - re-read the service configuration
svcadm restart - restart a service
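As a minimal sketch using the sendmail FMRI shown above (the output is only illustrative), checking, restarting and re-checking a service might look like this:

# svcs svc:/network/smtp:sendmail
STATE          STIME    FMRI
online         10:15:32 svc:/network/smtp:sendmail
# svcadm restart svc:/network/smtp:sendmail
# svcs -l svc:/network/smtp:sendmail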

Exercise 3B: SMC


Create a new service.
Check the status of the service.
Restart the service.


Theme 3.3: Monitoring System Access


Once a system is operational, it's important to monitor system usage. Solaris provides
several ways to do this easily: the w, who and finger commands all provide variations on
the same theme of listing users and their logins. For details on what applications
individual users are executing, the ps command should be used.
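For example, a quick sketch of listing the processes owned by a particular user (jbloggs is just an example username) is:

$ ps -f -u jbloggs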

The w command provides a detailed summary of the logins for users on the system, in
order of the terminal and type (pseudoterminals, for example) that are being used. The
display prints a list of currently logged in users, their terminal number, their date of login,
and the number of days, hours and minutes that the terminal has been idle. In addition,
CPU usage data is displayed, along with the foreground command being executed
currently by the user. For example, the following system has four users (jbloggs, ssmith,
kjones and mdulce) connected:

$ w
3:35pm up 361 day(s), 9:32, 442 users, load average: 2.34, 2.13, 2.04
User tty login@ idle JCPU PCPU what
jbloggs pts/0 27Feb02 7:20 141:47 1:06 /bin/tcsh
ssmith pts/1 27Feb02 5:42 27:16 46 /bin/csh
kjones pts/2 27Feb02 46 51:45 43 /bin/bash
mdulce pts/3 27Feb02 23:26 55 /bin/ksh

In addition to user information, the w command displays a system load average for the
past minute (2.34), five minutes (2.13) and fifteen minutes (2.04) respectively. The normal
peak load for a uniprocessor system is 1.0, so the system is operating at 2.34 times its
capacity. In addition, the up time is displayed (361 days), and the number of logged in
users is also displayed (442). Thus, the w command provides a useful aggregation of user
and system data. If you're just interested in user data, then the who command may be
more useful:

$ who
jbloggs pts/0 Feb 27 07:27
ssmith pts/1 Feb 27 07:27
kjones pts/2 Feb 27 07:32
mdulce pts/3 Feb 27 07:32


Alternatively, if you'd like extended information about users, then the finger command
may be used:

$ finger
Login Name TTY Idle When Where
jbloggs John Bloggs *pts/0 7:20 Wed 07:27 sydney
ssmith Sue Smith *pts/1 5:43 Wed 07:27 adelaide
kjones Keisha Jones *pts/2 5:43 Wed 07:38 canberra
mdulce Marina Dulce *pts/3 4:43 Wed 07:56 newyork

Here, the full name of each user is displayed along with their client hostname (sydney,
adelaide, canberra and newyork). In security terms, it's often useful to verify that users are
logging in from where they are expected: obviously, if mdulce is stationed in New York,
then you wouldn't expect to see a login from Sydney. In this situation, it may be wise to
verify that the account is being accessed by the person authorized.

One way of authenticating logged-in users is to write to them in their logged-in terminal,
requesting an authentication token (like a birthdate), or a pre-recorded token, like "what is
your dog's name?". The administrator would type the following to query mdulce:

sydney# write mdulce
Dear Marina,
I notice that you are logged in from Sydney and not New York.
Could you please type your authentication token now, or your session will be terminated in 5 minutes.
Sincerely,
The Administrator (root@sydney)
^d


If the user responds with their birthdate or the pre-arranged token, then their session can
continue; otherwise it may be necessary to terminate their session.


Exercise 3C: Monitoring System Access


Check that no unauthorized users are using your system by using the w, who and
finger commands.
Read the man page for the wall command. Practice sending alert messages to all
users.


Theme 3.4: User Management


Since Solaris is a multi-user system, every user on the system must have an account that is
denoted by a unique login identifier (the username). User data is stored in the password
database /etc/passwd. Solaris usernames have a maximum of eight characters. The main
authentication token for the login sequence is the password, which is stored in an
encrypted format in the shadow password database /etc/shadow. Historically, user and
password data was always stored in the password database. However, because all users on
the system can read the password database, a separate shadowed database was introduced
to prevent dictionary-based cracking attacks (the shadow password database is only
readable by root).

Users on the system have the ability to spawn processes from a shell that are protected
from other users; thus, the user jbloggs cannot interfere with (i.e., send signals to) the
processes spawned by the user mdulce, unless the user jbloggs has super-user privileges.
Alternatively, as part of the advanced Role Based Access Control (RBAC) facility, the
user jbloggs may be granted authorization to send signals to mdulce's processes. But
generally, the Solaris process model provides reliable user-based separation and protection
of data.

Users on the system must belong to one primary group, and may optionally belong to a
number of different groups. Group membership provides benefits in terms of resource
sharing, although processes must always be owned by a specific user and not just a group.
However, processes do have a group association which is the current group of the
executing user.

In addition to a user-based process model, Solaris also provides a user-based file access
permission model, where every file on a file system is owned by a specific user. A set of
file permissions, expressed as octal or symbolic codes, is used to protect files against
interception by unauthorized users. Permissions must be set for three classes of user: the
owner, the group and everyone else. The group in this case is the current group of the file
owner, typically the user's primary group. There are three types of file permissions that
can be set: read, write and execute. Executable files, such as scripts, must have the
executable bit set (there are no file extensions that designate file types in Solaris).

Examining the password database provides some insight into the specific properties
assigned to system users:

# cat /etc/passwd
root:x:0:1:Super-User:/:/sbin/sh

daemon:x:1:1::/:
bin:x:2:2::/usr/bin:
sys:x:3:3::/:
adm:x:4:4:Admin:/var/adm:
lp:x:71:8:Line Printer Admin:/usr/spool/lp:
uucp:x:5:5:uucp Admin:/usr/lib/uucp:
nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
listen:x:37:4:Network Admin:/usr/net/nls:
nobody:x:60001:60001:Nobody:/:
noaccess:x:60002:60002:No Access User:/:
nobody4:x:65534:65534:SunOS 4.x Nobody:/:


Here, the database fields are delimited by the colon character, and represent the following
attributes for the root user in this example:

The username (root).
The password field (x, indicating that password shadowing is enabled).
The User ID (UID) field (0), which is unique on the system for each user.
The Group ID (GID) field (1), which is the primary group for the user.
The comment field (Super-User), which contains the user's full name.
The home directory (/) for the user.
The default shell for the user (/sbin/sh).

A similar structure is used by the group database (/etc/group) which defines all of the
group memberships on the system:

# cat /etc/group
root::0:root
other::1:
bin::2:root,bin,daemon
sys::3:root,bin,sys,adm
adm::4:root,adm,daemon
uucp::5:root,uucp
mail::6:root, pwatters
tty::7:root,tty,adm
lp::8:root,lp,adm
nuucp::9:root,nuucp
daemon::12:root,daemon

sysadmin::14:
nobody::60001:
noaccess::60002:
nogroup::65534:


Here, the database fields are again delimited by the colon character, with the following
attributes defined for the group mail:

The group name (mail).
The group password (none).
The Group ID (GID), which is unique for each group.
The comma-delimited list of users (root and pwatters).
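Accounts and groups are created with the command-line tools listed in the learning outcomes. As a minimal sketch (the username, group name, UID and GID below are only examples), the following commands create a group and a user with a home directory and default shell, then set an initial password:

# groupadd -g 1001 research
# useradd -u 1001 -g research -d /export/home/jbloggs -m -s /usr/bin/bash jbloggs
# passwd jbloggs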


Exercise 3D: User Management


Make a list of users who have accounts on your system.
Read the man page for the find command. Make a list of all accounts that own no
files on the system.


Assignment 3.1: Changing run levels


Description
Students should practice switching between single-user and multi-user run levels. Boot
into single-user mode and note down the message concerning the root password.

Procedure
1. Shut down your system using the shutdown command.
2. Boot the system into single-user mode.
3. Make a note of the message concerning the root password.
4. Boot your system into multi-user mode.

Assignment 3.2: Adding users


Description
Take a copy of the /etc/passwd file. Add a new user using the useradd command. Show the
new copy of the /etc/passwd file after it has been updated.

Procedure
1. Login to your system as root.
2. Copy the file /etc/passwd to /etc/passwd.orig.
3. Add a new user called jbloggs to your system using the useradd command.
4. Copy the file /etc/passwd to /etc/passwd.new.
5. Use the diff command to display the differing line between /etc/passwd.orig and /etc/passwd.new.

Module 4 - Security and Process Control


Overview
Processes allow jobs to be performed on a Solaris system. They provide the envelope for
executing system calls, functions and other routines from within an application. Every
program running on a Solaris system, including user shells, must run as a process. Thus,
it's critical to understand how to work with and manage processes. Solaris provides tools
to display information about processes, and send signals to active processes instructing
them to terminate or restart. Process monitoring tools are an important operational aspect
of managing a Solaris system.

Linked with the concept of processes is security: process security, file security and user
security. Every file on a Solaris file system has a permissions string associated with it,
allowing users, group members and all other users to read, write and execute files,
according to the permission string. From a security perspective, it's important to
understand how file permissions can easily allow intruders access to a system if not set
appropriately. In addition, default file permissions and high-level access control lists
complement the standard UNIX file permission model on Solaris.

Learning Outcomes
Upon successful completion of this course, students will be able to:

State how to find regular expressions in files.
Define how to print or change directory and file permissions.
Explain the role of umask values in setting directory and file permissions.
List the procedures for using access control lists (ACLs).
Explain how to use the process management commands.
Describe the role of signaling in process management.
State the steps required to terminate processes.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: Regular Expressions.
3. Read Theme 2: File Security.
4. Read Theme 3: Access Control Lists.
5. Read Theme 4: Processes.
6. Read Theme 5: Signals.
7. Complete Assignment 4.1: Monitoring Processes.
8. Complete Assignment 4.2: File Permissions.
9. Complete Assignment 4.3: Access Control Lists.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Read Chapter 10 from Oracle Solaris Administration: Common Tasks


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc).

Theme 4.1 - Regular Expressions


Regular expressions allow pattern matching operations to be performed on data streams in
Solaris. This allows applications, such as the stream editor (sed), to perform complex text
processing operations. A key tool for checking file contents quickly is the grep command;
grep allows files to be searched by using a search term or regular expression. By using
tools like sed and grep, the shell can be used to process large amounts of data using
regular expressions.

An example use of regular expressions is to modify a source string which occurs within a
file, by replacing it with a new target string. To identify occurrences of the source string in
a large text, such as Jane Austen's Northanger Abbey, we can use the grep command to
search for the string "Sarah":

$ grep Sarah nabby10.txt
Sally, or rather Sarah (for what young lady of common
Her father, mother, Sarah, George, and Harriet,
for all their indignation and wonder; though Sarah indeed
when he recollected this engagement, said Sarah,
was information on Sarah's side, which produced only a bow


Here, each line that contains the string "Sarah" is returned. If we wanted to change the
story, and replace the source string "Sarah" with the target string "Aja", the following sed
command could be used:

$ sed 's/Sarah/Aja/g' nabby10.txt > nabby11.txt


Here, the substitution s/Sarah/Aja/g is evaluated by sed against the contents of
nabby10.txt, replacing the source string "Sarah" with the target string "Aja", and the
output is redirected to the new file nabby11.txt. Now, if we repeat the grep command on
the new file nabby11.txt, the following output will be displayed:

$ grep Sarah nabby11.txt


Oops! There's no output, because every occurrence of "Sarah" in nabby11.txt has been
replaced by "Aja". To display all of the lines containing "Aja", the following command
can be used:

$ grep Aja nabby11.txt


Sally, or rather Aja (for what young lady of common
Her father, mother, Aja, George, and Harriet,
for all their indignation and wonder; though Aja indeed
when he recollected this engagement, said Aja,
was information on Aja's side, which produced only a bow


More information regarding sed and regular expressions can be obtained from the sed
FAQ (http://sed.sourceforge.net/sedfaq.html).

Exercise 4A: Regular Expressions


Using sed, create a script that reads in a text file and prints a left hand margin of 4
space characters suitable for printing.
Using sed, search for a string and remove it.

Theme 4.2 - File Security


All files stored on a Solaris file system must be owned by a specific user, and have an
association with a specific group. The group assigned to the file by default is the current
group of the user who created the file. A file's ownership can be changed by the super-user,
while a user who owns a file can change the group association.

Access to files is governed by a set of file access permissions which can be set by a file's
owner, or by the super-user. There are three user classes covered by the file permissions
structure:

Owners
Group members
Other users

In addition, there are three types of permissions that can be granted to each class of user:

Read
Write
Execute

Each of these permissions can be set individually by using symbolic codes, or in absolute
terms by using octal codes. For example, to set read permissions on a file called
database.txt for the group associated with the file, the following command could be used:

# chmod g+r database.txt


To remove the permissions, the following command could be used:

# chmod g-r database.txt


In terms of user classes, owners are designated by "u", group members by "g", and all other
users by "o". File permissions can be set for reading by "r", writing by "w" and execution
by "x". Since Solaris does not use file extensions to indicate executable status, all
executable files (including scripts and binaries) must have the executable bit set for the
user class that has permission to execute them.
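As a brief sketch of the octal form (the file name is just an example), the following command grants read and write permission to the owner, and read permission to the group and to all other users:

# chmod 644 database.txt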

Let's look at examples of file permissions by using the ls command that lists files. If you
pass the -l option to ls, you will be able to display a list of file permissions for the current
directory:

$ ls -l
total 1428
-rwx------   1 pwatters phd   712808 Sep 23  2001 test*
-rw-------   1 pwatters phd      216 Sep 23  2001 test.cpp


Here, we can see that 1,428 x 512 byte blocks of data are stored in the directory, which
contains two files: an executable file called test, and a C++ source file called test.cpp. In
both cases, the user has read and write permission. However, for the test executable, the
user also has executable permissions. No other users have any permission to access the file
(apart from the super-user).

If we wished members of the group phd to have read access to the files, we could use the
following command to grant it:

$ chmod g+r *
$ ls -l
total 1428
-rwxr-----   1 pwatters phd   712808 Sep 23  2001 test*
-rw-r-----   1 pwatters phd      216 Sep 23  2001 test.cpp


Notice that a new "r" has been added in the fifth column; this indicates group read
permission. Let's add read permission for all users, and examine the result:

$ chmod o+r *
$ ls -l
total 1428
-rwxr--r--   1 pwatters phd   712808 Sep 23  2001 test*
-rw-r--r--   1 pwatters phd      216 Sep 23  2001 test.cpp


Note that the eighth column now has read permission indicated for all users. There are ten
columns in total, which represent the following:

File type (e.g., d for directories)


Read for owner
Write for owner
Execute for owner
Read for group
Write for group
Execute for group
Read for all
Write for all
Execute for all

Exercise 4B:File Security


Create a new empty file using the touch command. Practice changing file
permissions using symbolic codes. Compare the output of ls -l for each change that
you make, ensuring that you understand what each symbol means.

Theme 4.3: Access Control Lists


Although standard Solaris file permissions allow users to protect their files against access
by those outside their groups, the process of creating groups requires super-user privileges.
Thus, if you need to grant access to a user who is not in the same group as you, you would
need to make the file world-readable, which is not really desirable. This is where file
Access Control Lists (ACLs) come into play: they allow you to set read, write and
execute permissions on a file for a specific user, irrespective of that user's relationship to
you in terms of group membership. ACLs are the best way to ensure the security of files
which are not intended for wide distribution.

To set an ACL, the setfacl command can be used. For example, to set read-write access for
the user dmacbeth on the file database.txt, the following command would be used:

$ setfacl -m user:dmacbeth:rw- database.txt


A + symbol at the end of the permissions string then signifies that an ACL has been set on
a file as shown in the following ls display:

# ls -l /usr/local/db/database.txt
-rw-r--r--+  1 root  sys  6876454 Apr 24 11:43 /usr/local/db/database.txt
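The ACL entries themselves can be reviewed with the getfacl command; the output below is only illustrative of the general format:

$ getfacl database.txt

# file: database.txt
# owner: pwatters
# group: phd
user::rw-
user:dmacbeth:rw-
group::r--
mask:rw-
other:r--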

Exercise 4C: Access Control Lists


Create a new empty file using the touch command. Create three access control lists
for three different users for the read, write and execute permissions. Compare and
contrast the ls-l output for the file after applying each access control.
Practice removing access control lists.


Theme 4.4: Processes


A process is a discrete job that is executed by a user and is identified by a unique Process
ID (PID). When a user executes a process, no unprivileged users may interfere with that
process; they own the process, much like a user owns a file. Note that there is no
concept of process access permissions allowing group members or other users to
communicate with a user's processes, although processes are associated with the GID of
the executing user.

Since Solaris supports multiple concurrent users, many different users can execute
processes at the same time. In addition, each process can spawn a number of lightweight
processes (or threads). The use of threads minimizes the overhead associated with creating
and killing processes, which is relatively large compared to threads. Users
interact with their processes by sending signals through a programming API, or directly on
the command line by using the kill command.

One of the best features of the process model is the ability for the super-user to assign
execution priorities to each process on the system. Thus, more urgent tasks can be granted
priority over less urgent tasks. In addition, multiprocessor systems can allocate one or
more processors to execute a single process or set of processes.
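As a brief sketch of adjusting priorities (the command path and PID are only examples), a long-running job can be started at a lower priority with nice, and the priority of an existing process can be lowered further with renice:

# nice -n 10 /usr/local/bin/batchjob &
# renice -n 19 -p 26934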

The list of processes running on a system is visible to all users and can be generated by
using the ps command. By default, the ps command only shows the processes for the
currently logged-in user, as shown in the following example:

$ ps
PID TTY TIME CMD
26923 pts/8 0:00 tcsh
26934 pts/8 0:00 newmail


In this example, the user has two processes running (26923 and 26934), both spawned
from terminal 8, and which have both consumed minimal amounts of CPU time. The
applications running are the TENEX C shell (tcsh) and the newmail command; the former is
running in the foreground, while the latter is running in the background.

The ps command has many options. For example, to display a list of all processes running
on a system, the ps -A command can be used as follows:

$ ps -A
PID TTY TIME CMD
0 ? 0:13 sched
1 ? 0:50 init
2 ? 0:03 pageout
3 ? 250:35 fsflush
562 ? 0:00 sac
345 ? 0:01 xntpd
255 ? 0:00 lockd
62 ? 0:00 sysevent
64 ? 0:00 sysevent
374 ? 0:00 dptelog
511 ? 0:00 keyserv
291 ? 0:11 cron
212 ? 0:00 in.ndpd
336 ? 0:17 utmpd


To display a full listing for all processes, the ps -Af command can be used:

$ ps -Af
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Apr 11 ? 0:13 sched
root 1 0 0 Apr 11 ? 0:50 /etc/init
root 2 0 0 Apr 11 ? 0:03 pageout
root 3 0 0 Apr 11 ? 250:35 fsflush
root 562 1 0 Apr 11 ? 0:00 /usr/lib/saf/sac -t 300
root 345 1 0 Apr 11 ? 0:01 /usr/lib/inet/xntpd
root 255 1 0 Apr 11 ? 0:00 /usr/lib/nfs/lockd
root 62 1 0 Apr 11 ? 0:00 /usr/lib/sysevent/syseventd
root 64 1 0 Apr 11 ? 0:00 /usr/lib/sysevent/syseventconfd
root 374 1 0 Apr 11 ? 0:00 /opt/ORACLEWhwrdg/dptelog
root 511 1 0 Apr 11 ? 0:00 /usr/sbin/keyserv
root 291 1 0 Apr 11 ? 0:11 /usr/sbin/cron
root 212 1 0 Apr 11 ? 0:00 /usr/lib/inet/in.ndpd
root 336 1 0 Apr 11 ? 0:17 /usr/lib/utmpd


Here, we can see the command names associated with each of the processes being
executed. An alternative, interactive view of process activity is provided by the top
command:

last pid: 4348; load averages: 1.28, 1.20, 1.21 15:40:11
344 processes: 333 sleeping, 8 zombie, 1 stopped, 2 on cpu
CPU states: 54.6% idle, 22.3% user, 17.4% kernel, 5.7% iowait, 0.0% swap
Memory: 2048M real, 1158M free, 1035M swap in use, 11G swap free

PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
20890 jdoe 1 0 19 1208K 864K cpu/2 458.5H 24.89% a.out
4266 pwatters 1 52 0 3128K 2272K cpu/1 0:00 0.58% top
4321 jjfrost 1 60 0 2800K 2216K sleep 0:00 0.15% imapd
307 root 39 52 0 17M 9648K sleep 21:24 0.07% nscd
572 dnscache 1 58 0 3064K 2336K sleep 10:49 0.04% dnscache
4155 jbloggs 1 58 0 2792K 2200K sleep 0:00 0.04% imapd
4153 jbloggs 1 52 0 2792K 2200K sleep 0:00 0.03% imapd
573 dnslog 1 58 0 1024K 704K sleep 4:34 0.02% multilog
569 root 16 59 0 110M 76M sleep 45:09 0.02% squid
290 root 23 58 0 5320K 2576K sleep 10:06 0.02% syslogd
18917 root 1 59 0 3576K 2440K sleep 0:06 0.02% sshd
19163 root 1 49 0 5504K 3296K sleep 0:01 0.02% smbd
4237 jtintern 1 54 0 2792K 2184K sleep 0:00 0.02% imapd
4154 jbloggs 1 60 0 2800K 2216K sleep 0:00 0.02% imapd
559 root 1 58 0 2832K 1344K sleep 12:04 0.02% sshd


Extra columns in the top command include THR (number of threads spawned by a
process), PRI (process priority), NICE (process nice value), SIZE (process size), RES
(amount of application data resident in memory), and STATE (run state or sleep). A
summary of system load data is also provided, including CPU and memory load. To view
the status of all CPUs installed on a system, the psrinfo command can be used:

$ psrinfo
0 on-line since 04/11/02 04:09:58
1 on-line since 04/11/02 04:09:59
2 on-line since 04/11/02 04:09:59
3 on-line since 04/11/02 04:09:59

Exercise 4D: Processes


Using the grep and awk commands, practice extracting specific columns of data
from the ps command to produce status reports.
Using the top command, record the CPU load values for ten minutes and create a
graph showing the performance profile. Start up a CPU-intensive application like
Oracle and compare the results.

Theme 4.5: Signals


A signal is a message that is sent to an active process by the process owner. The two most
commonly used signals sent from the shell are SIGHUP, which typically causes a program
to re-read its configuration file, and SIGKILL, which forces a process to terminate. A list
of the most commonly used Solaris signals is shown below:

Signal     Code   Action   Description
SIGHUP       1    Exit     Hangup
SIGINT       2    Exit     Interrupt
SIGQUIT      3    Core     Quit
SIGILL       4    Core     Illegal Instruction
SIGTRAP      5    Core     Trace or Breakpoint Trap
SIGABRT      6    Core     Abort
SIGEMT       7    Core     Emulation Trap
SIGFPE       8    Core     Arithmetic Exception
SIGKILL      9    Exit     Killed
SIGBUS      10    Core     Bus Error
SIGSEGV     11    Core     Segmentation Fault
SIGSYS      12    Core     Bad System Call
SIGPIPE     13    Exit     Broken Pipe
SIGALRM     14    Exit     Alarm Clock
SIGTERM     15    Exit     Terminated
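Signals are sent from the shell with the kill command, either by name or by number. As a brief sketch (PID 26934 is the newmail process from the earlier ps example), the first command asks the process to re-read its configuration, and the second forces it to terminate:

$ kill -HUP 26934
$ kill -9 26934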


It's also possible to manage processes in the shell by using job management. This involves
running one process in the foreground, to which standard input is streamed, and any
number of jobs running in the background. If input needs to be entered on standard input
for an application running in the background, it must be brought into the foreground first.

Let's look at an example. Imagine that you are running the Bourne Again shell (bash) in the
foreground. You then start an application called firewall, which executes in the
foreground until it is suspended by pressing CTRL+z. When a process is suspended, it
does not continue execution; it merely waits to be killed or to be resumed. To resume
execution in the background, the bg command must be used. To bring the application into
the foreground, the fg command must be used. If a number of processes are in the
background, then the job number (which is shown enclosed within square brackets when
the job is sent into the background) must be supplied. The following example shows the
firewall application being started in the foreground, suspended, sent into the background,
another command (ls) being performed in the foreground, and the background job being
brought back into the foreground:

$ firewall
^z
Suspended
$ bg
[1] firewall &
$ ls /home/pwatters
database.txt secret.txt
$ fg


It's important to remember that once the shell that spawned the original application has
exited, it is not possible to bring a background job into the foreground.

Exercise 4E: Signals


Start up an application in the background, and send a kill signal to it. Verify, using
the ps command, that the process is no longer running.
Comment out a service in the Internet super server (inetd) configuration file
(/etc/inetd.conf). Send a hangup signal to the inetd process, and check that the
service is no longer available.


Assignment 4.1: Monitoring Processes


Description
Students should use the ps command to identify a set of processes that belong to
themselves. Kill one process, and e-mail the output to your instructor, including the ps
command and kill command output.

Procedure
1. Login to your system as root.
2. Use the ps command to identify a set of processes that belong to the root user
using the grep command.
3. Identify a process that can be safely killed.
4. Kill the process by sending a SIGKILL signal with the kill command.

Assignment 4.2: File Permissions


Description
Students should create a file called /tmp/test.txt and print its file permissions by using the
ls command. Students should set the following permissions: user read-only; user read-write; user read-write-execute; user-execute, group-execute; all-read, none-write.

Procedure
1. Login to your system as root.
2. Create a file called /tmp/test.txt.
3. Print the file's permissions by using the ls command.
4. Set the following permissions: user read-only; user read-write; user read-write-execute; user-execute, group-execute; all-read, none-write.
5. Print the file's permissions again by using the ls command.

Assignment 4.3: Access Control Lists


Description
Set an ACL for another user on the system with full permissions for the /tmp/test.txt file.
Use ls to display its permissions.

Procedure
1. Login to your system as root.
2. Create a file called /tmp/test.txt.
3. Using setfacl, grant full access permissions to the file for another user.
4. Print the file's permissions by using the ls command.

Module 5 - Files, Directories and File Systems


Overview
A Solaris system persists data through the use of files which are located in directories that
are physically stored on a disk device, such as a hard disk, CD-ROM disc or DVD-ROM
disc. Solaris supports many different file system types, such as the UNIX File System
(UFS), FAT File System (PCFS) and the System V File System (s5fs). Once a disk has
been prepared for use through formatting, it can be mounted on a specific mount point
(such as /usr) by using the mount command. This module introduces the main file system
types and how they can be mounted using Solaris. In addition, the purpose of the major
top-level directories in Solaris will also be reviewed.

Solaris provides a number of different tools which operate on files and file systems,
including the volume manager, that allows floppy disks and CD-ROM discs to be mounted
and unmounted by unprivileged users. In addition, a number of compression programs can
be applied to individual files to increase the amount of space available for other
applications. In this module, we will review all of the standard Solaris tools that perform
file operations.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe the purpose of the main system directories (/home, /etc, /opt, /usr, /export,
/).
Describe the available file system types supported by Solaris.
List the options used with the mount command and describe their purpose.
State the purpose of the /etc/mnttab and /etc/vfstab files.
Explain how the removable media volume manager works with floppy disks and
CD-ROMs.
List the steps required to compress a file.
Describe the purpose of regular files, directories, symbolic links, device files, and
hard links on a Solaris file system.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: File System Overview.
3. Read Theme 2: File Types and Permissions.
4. Read Theme 3: Mounting File Systems.
5. Read Theme 4: Volume Manager.
6. Read Theme 5: File Compression.
7. Complete Assignment 5.1: File System Features.
8. Complete Assignment 5.2: Copying Files.
9. Complete Assignment 5.3: Symbolic Links.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Oracle Solaris Administration: Common Tasks, Chapter 13


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 5.1: File System Overview


Most data on Solaris systems is physically stored on hard disks. These disks, and all other
hardware devices, are represented on the system by using a set of files. Physical device
files are stored in the /devices directory, and form a hierarchical tree mapping the set of
buses and peripheral devices attached to the system. For example,
/sbus@1f,0/ORACLEW,fas@2,8800000/sd@1,0 is a physical device name for a disk.
Logical device files are stored in the /dev directory, and provide a method for addressing
devices that differs from the physical files. For example, the raw and block devices for a
disk partition would be /dev/rdsk/c0t0d0s7 and /dev/dsk/c0t0d0s7 respectively. Generally,
if you are performing actions of disks within the operating system environment, you
would use logical device names. If you are working with the OpenBoot PROM monitor,
you might need to use physical device names. Man pages for disk related command should
state whether logical or physical device references should be used.

Disk Devices
Physical device names typically identify the bus to which a device is attached, its address
and any arguments, while logical device names refer to more specific features. The
address has the form drv@addr:args where drv is a driver name, addr is a device
address, and args are any device arguments. In the case of hard disks, logical device
names include the slices which map to physical disk partitions, although most
administrators refer to slices and their underlying partitions interchangeably. When the
system performs a reconfiguration reboot, the entries in the /devices and /dev directories
are recreated to reflect any changes in the system's hardware, such as a new disk. Entries
in the /etc/minor_perm file determine how file permissions should be applied to newly
created device files. For example, the entry sd:* 0666 root wheel specifies that sd
disk nodes should have the octal permissions 666, owner root and group wheel.

The /etc/path_to_inst file associates each physical device name with a driver and instance
number, so that instance numbers remain consistent across reboots. The following example
shows an entry for the SBUS of a SPARC system and the disks attached to it:

/sbus@1,0 1 sbus
/sbus@1,0/ORACLEW,fas@3,8800000 0 fas
/sbus@1,0/ORACLEW,fas@3,8800000/sd@4,0 34 sd
/sbus@1,0/ORACLEW,fas@3,8800000/sd@0,4 273 sd
/sbus@1,0/ORACLEW,fas@3,8800000/sd@1,5 281 sd
/sbus@1,0/ORACLEW,fas@3,8800000/sd@2,6 289 sd
/sbus@1,0/ORACLEW,fas@3,8800000/sd@3,7 297 sd
/sbus@1,0/ORACLEW,fas@3,8800000/sd@5,1 305 sd


The prtconf command can also be used to display disk device information. For an Ultra 5
system that has an IDE disk installed, the following display shows the details of the disk:

# prtconf
System Configuration: Oracle Microsystems Oracle4u
Memory size: 128 Megabytes

System Peripherals (Software Nodes):


ORACLEW,Ultra-5_10

pci, instance #0

pci, instance #0
ide, instance #0
disk
cdrom
dad, instance #0
sd, instance #30

File Systems
Solaris file systems always have two devices defined for communication with
applications: a raw device, stored in the /dev/rdsk directory, that is designed for low-level
operations, and a block device, stored in /dev/dsk, that is intended for high-level operations,
including buffered reading and writing of data. Whether referred to by their raw or block
device names, file systems have four characteristics that are combined to form a file system name:

controller (c)
target (t)
disk (d)
slice (s)

An example file system is /dev/dsk/c0t0d1s5, which can be read as controller 0, target 0,
disk 1, slice 5. By using such a complex nomenclature, a large number of disk controllers
and SCSI buses can be supported. For example, an Oracle Enterprise 450 has 20 SCSI disk
bays supported by multiple controllers. Thus, if one controller breaks down, the other can
be used to immediately take its place, if used within a Redundant Array of Inexpensive
Disks (RAID). RAID technology allows a further abstraction of disk devices, referred to
as meta-disks, that allows large, virtual file systems to be constructed from smaller ones
(striping), or for disks to be made fully redundant with each other (mirroring).

The default file system type for Solaris systems is the UNIX File System (UFS). UFS file
systems have four key components:

a boot block
super blocks
inodes
disk blocks

The boot block of a file system is used to store all data relating to booting a system. If a
file system has a valid boot block, then the operating system may be booted from it. A
system without a boot block on at least one file system cannot be booted from the installed
file systems. However, a boot block can be installed manually if necessary. Super blocks
store key file system data, including the size of the file system, the location of inodes, and
the number of disk blocks available. The inodes store information about the files stored on
the file system, while the disk blocks actually store the data.

In general, a Solaris file system is laid out in the following way:


Slice 0 - / partition
Slice 1 - virtual RAM
Slice 2 - whole disk
Slice 3 - /export
Slice 4 - swap space
Slice 5 - /opt
Slice 6 - /usr
Slice 7 - /export/home

However, if you wanted to use only one slice to store all data, such as Slice 6 for /usr files,
this is acceptable. Also, other partition names, such as /data, could also be used on any
slice. The exception is Slice 2, which shouldn't be used to directly store any file systems,
since it refers to the whole disk.
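As a quick sketch, the df command reports mounted file systems and their usage; the figures shown below are only illustrative:

$ df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    2053605  887218 1104774    45%    /
/dev/dsk/c0t0d0s6    4130238 2785210 1303726    69%    /usr
/dev/dsk/c0t0d0s7    8261270  412100 7766558     6%    /export/home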

Exercise 5A: File System Overview


Use the df command to show the mounted volumes on your system. Check that none
are nearing 100% capacity.
Use the prtconf command to make a list of the devices attached to your system.

Theme 5.2: File Types


There are four main file types supported under Solaris:

regular files
directories
symbolic links
hard links

A regular file is used to directly store data that is designed to be directly retrieved. The
maximum file size can be measured in gigabytes, much larger than most applications will
require. Directories are special files that form the basis of the hierarchical file system.
They allow the file system to be divided into logical entries, starting from the root
directory /, with second-level directories such as /etc, /usr, /home and so on. The process
of hierarchically creating directories is only limited by the capacity of the disk: any
number of subdirectories can be created under /, /usr, and so on. For example, the /etc
parent directory may have the child directories /etc/default, /etc/security and
/etc/rc2.d. Each of these directories may also contain their own child directories.

Inside each directory are two entries that you should be aware of: the dot file "." and the
dot-dot file "..". The dot file always refers to the current working directory, while the dot-dot
file refers to the parent directory of the current working directory. Thus, to execute a file
called test.sh located in the current working directory, the following command could be
used to construct a relative path:

$ ./test.sh


When expanded to an absolute path, the effective command might look like this,
depending on the current working directory:

$ /home/james/scripts/test.sh


However, if you wanted to execute a script in the parent directory of the current working
directory, the dot-dot notation should be used:

$ ../test.sh

When expanded to an absolute path, the effective command would look like this:

$ /home/james/test.sh


Using relative paths in this way is very useful in scripts and when working on the
command-line because the absolute path does not need to be entered or even known in
advance.

Symbolic links and hard links are used to create references to files. A hard link is a direct
pointer to a file that is equivalent to the original file, while a symbolic link simply creates
an indirect relationship between the link and the original file. Symbolic links are
commonly used to create references to directories and files that lie on different file
systems. Hard links can only be used within the same file system. An example of using
symbolic links is when a user wants to create a reference to a directory of test data that
resides on a file system with a long directory path, such as
/home/jimmy/data/base/dna/nucleotides: it is simply easier for the user tara to create a
symbolic link in her own home directory to this directory, by using the following
command:

$ ln -s /home/jimmy/data/base/dna/nucleotides /home/tara/nucleotides


The user tara may now cd to the nucleotides directory and back to her home directory as
parent, since the symbolic link is always relative. Note that the link name can be different
to its referent; thus, the following link would be equally valid:

$ ln -s /home/jimmy/data/base/dna/nucleotides /home/tara/nucs


Performing an ls -l on this link would show the following entry:

$ ls -l /home/tara/nucs
lrwxrwxrwx 1 tara 1 May 2 11:23 nucs -> /home/jimmy/data/base/dna/nucleotides
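A hard link is created with ln without the -s option; the file names below are only examples, and both names must reside on the same file system:

$ ln /home/tara/results.txt /home/tara/results-backup.txt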

Exercise 5B:File Types and Permissions



Summarize the role of symbolic and hard links.
Read the man page for ln. Are there any restrictions on creating symbolic links?

Theme 5.3: Mounting File Systems


Under Solaris, file systems must be mounted on a mount point underneath the root
directory, or one of its descendants. Effectively, this means that you must create a
directory somewhere on a mounted file system (preferably the root file system) from
which the file system can be referred. For example, if the file system /dev/dsk/c0t0d0s6 is
to be mounted on /data, then the directory /data must be created first by using the
mkdir command as shown below:

# mkdir /data
# mount /dev/dsk/c0t0d0s6 /data


Once mounted, the new file system can be referred to in the same way as any other file
system. To ensure that the file system is automatically mounted at boot time, an entry must
be made in the /etc/vfstab file, as shown below:

#device             device              mount   FS    fsck  mount    mount
#to mount           to fsck             point   type  pass  at boot  options
/dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /data   ufs   2     yes      -


When a system is shutdown, file systems are automatically unmounted. However, if you
want to perform maintenance on a disk that hosts a file system, then it can be manually
unmounted. The following command unmounts the /data file system:

# umount /data


The mount command makes a number of assumptions about a file system to be mounted,
including the fact that it is a UNIX File System (UFS). To modify these assumptions for
different situations, a number of options can be passed to the mount command, as shown
in the following summary:

bg: continues to attempt mounting in the background if the original attempt fails.
hard: continually sends requests to mount.
intr: permits keyboard interrupts while mounting.
largefiles: enables support for files larger than 2 GB.
logging: creates a log of all file system transactions so that any lost transactions can be recovered if a file system fails.
noatime: suppresses access time updates on files.
remount: allows a soft remount to be performed.
retry: determines how many times a retry is performed.
ro: allows the file system to be mounted read-only.
rw: allows the file system to be mounted read-write.
suid: allows setUID applications to be executed on the file system.
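As a brief sketch (using the device and mount point from the earlier example), options are passed to mount with the -o flag, and several options can be combined with commas:

# mount -o ro,noatime /dev/dsk/c0t0d0s6 /data
# mount -o remount,rw /data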

Exercise 5C: Mounting File Systems


Read the man page for mount. Write down what the option soft means.
Write a mount command for a file system /dev/dsk/c0t0d0s5 mounted on /usr with a
read-only option.
Write a mount command for a file system /dev/dsk/c0t0d0s6 mounted on /export
with an option to prevent timestamping of files.

Theme 5.4: Volume Manager


Solaris has support for removable devices such as floppy disks and CD-ROMs. In
previous versions, only the super-user was permitted to eject floppy disks and CD-ROMs,
since normal users were not viewed as trusted. However, recent Solaris versions have
relaxed this restriction, making it easier for unprivileged workstation users to have access
to their own local devices, without having to be granted root privileges. When a normal
user interacts with the volume manager daemon (vold), by using the eject or volcheck
commands, their effective user ID is set to 0. While this is a potential security risk, there
are no known back doors in either application which would allow a root shell to be
spawned, for example. No special authentication is required to use the volcheck or eject
commands, other than being an authenticated user.

To mount a floppy disk, it can be inserted into the workstation's drive, and the volcheck
command executed:

$ volcheck


The volcheck command will also mount any CD-ROMs that are in the CD-ROM drive,
making them available through the CDE file manager.

Once you've finished using a mounted floppy, you can use the eject command to eject the
floppy disk:

$ eject

Exercise 5D: Volume Manager


Read the man page for eject. List the devices that can be used with eject.
Read the man page for volcheck. List any restrictions on using volcheck with
removable devices.

Theme 5.5: File Compression


File compression used to be an important space management task for system
administrators. However, with the development of disks of very large capacity, it has
assumed a less important role. Basically, information theory provides techniques for
exploiting redundancies within files, allowing the number of data elements stored to be
reduced. For example, image files contain a lot of redundant data because many of the
textures, such as the sky, are represented by exactly the same pixel values. Thus, a file can
be reduced in size by recoding it to exploit these redundancies.

A number of file compression programs are available on Solaris, each having its own
compression algorithm. Some algorithms are more suited to certain types of files, but
generally, the GNU gzip program achieves excellent compression ratios, which are
computed by comparing the number of bytes in the original file with the number of bytes
in the compressed file. Thus, a compression ratio of 2:1 means that only half the number
of disk blocks is required to store the file when compressed.

To compress a file using the standard UNIX compression program, simply pass the name
of the file to be compressed on the command-line:

$ compress file.txt


This command would create a compressed file called file.txt.Z, and remove the original
file.txt. To retrieve the file's contents, the following command would be used:

$ uncompress file.txt.Z


This command would recreate the file file.txt, and delete file.txt.Z. Using gzip, which
generally gives higher compression ratios than compress, the process is similar:

$ gzip file.txt


This command would create a gzip compressed file called file.txt.gz, and remove the
original file.txt. To retrieve the file's contents, the following command would be used:

$ gzip -d file.txt.gz

This command would recreate file.txt, and delete file.txt.gz. When using compression,
keep in mind that the process of compressing and uncompressing data is CPU-intensive,
and for large files, might increase the system load substantially. The overhead increases
when the highest compression level is selected: this is a strategy designed for achieving
optimal compression ratios, at the cost of extra CPU time. To request the maximum
compression level using gzip, the following command could be used:

$ gzip -9 file.txt

Exercise 5E: File Compression


Make three copies of the file /etc/path_to_inst in the /tmp directory called p1, p2,
and p3. Compress p2 using compress and compress p3 using gzip. Use ls to generate
a file listing of all three files, and compute the compression ratio between each file.

Assignment 5.1: File System Features


Description
Write a 1,000 word summary of the file systems used by Solaris. List their key
advantages, such as journaling, and compare with Linux and Windows file systems, and email it to your instructor.

Procedure
1. Read the readings for this module.
2. List the file system types used by Solaris.
3. List their key advantages.
4. Perform an Internet search to identify file systems used by other operating systems, such as Linux and Windows.
5. Summarize the features of each file system type.

Assignment 5.2: Copying Files


Description
Make a copy of three files from the /etc directory into /tmp. Check their file sizes using
ls. Compress all three files by using compress and gzip and check their file sizes again.
Save the output from each.

Procedure
1. Login to your system as root.
2. Make a copy of three files from the /etc directory into /tmp, such as /etc/passwd,
/etc/group and /etc/shadow.
3. Check their file sizes using ls.
4. Compress all three files by using compress and then gzip.
5. Check their file sizes using ls again.
6. Save the ls output.

Assignment 5.3: Symbolic Links


Description
Create a symbolic link and a hard link to the /usr/bin/ls command and perform a file
listing using ls and e-mail it to your instructor.

Procedure
1. Login to your system.
2. In your home directory, create a symbolic link to the /usr/bin/ls command.
3. List the file permissions by using ls.
4. In your home directory, create a hard link to the /usr/bin/ls command.
5. List the file permissions by using ls.

Module 6 - Booting and Disk Configuration


Overview
The booting process of a Solaris system can be quite complex, since literally hundreds of services
can be started. This requires efficient use of CPU time and advanced memory
management. We discuss both of these issues with respect to the Solaris boot process, and
the various boot and shutdown commands that can be used to manage a Solaris system.

Before disks can be used to host file systems, as discussed in the previous module, they
need to be physically added to the system. This operation can either be performed while
the system has been powered down, or in real-time by using the correct command
sequence. This high availability option is one of the best features of Solaris in a
production environment, where downtime must be kept to an absolute minimum. We will examine these disk procedures closely, in addition to examining the different types of disk device files which map
physical disk characteristics to logical system entities.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe the different ways that a system can be booted.
List all of the Solaris run levels.
Explain how to change init states.
Explain how to perform a reconfiguration reboot.
State the relationship between raw and block disk devices.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: Booting.
3. Read Theme 2: Making File Systems.
4. Read Theme 3: Monitoring File Systems.
5. Read Theme 4: Repairing File Systems.
6. Complete Assignment 6.1: Disk Volumes.
7. Complete Assignment 6.2: System Configuration.
8. Complete Assignment 6.3: Raw and Block Devices.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Oracle Solaris Administration: Common Tasks, Chapter 11


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 6.1: Booting


Booting is literally the bootstrapping process of bringing up a system from its firmware state to a fully-functioning multi-user system. The boot process is not just a matter of executing the kernel; there are many tasks which must be performed, including:

Checking the amount of RAM installed in the system
Reading the MAC (hardware) address from the primary network interface
Identifying the boot device
Loading the kernel
Configuring the primary network interface
Performing consistency checks on file systems
Mounting file systems
Executing RPC services
Setting up Internet gateways
Starting system daemons, such as the system log (syslog)
Starting third-party daemons, such as webservers
Loading Gnome

These tasks can be managed through the SMC. The Solaris installation program configures most of
the system-side tasks for you, although you will need to add your own startup scripts for
third-party applications, such as webservers. The following sample output shows an Ultra
10 system starting up:

ok boot
UltraSPARC 10, Type 5 Keyboard
ROM Rev. 3.1, 256 MB memory installed, Serial #123456
Ethernet address 3:4:2a:c:22:4f HostID 123456
Rebooting with command:
Boot device: /iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,8
SunOS Release 5.11 Version Generic 64-bit
Copyright (c) 1983-2016 by Oracle and/or its affiliates.
configuring IPv4 interfaces: hme0.
Hostname: johnson
The system is coming up. Please wait.
checking ufs filesystems
/dev/rdsk/c0t0d0s1: is clean.
NIS domainname is Cassowary.Net.
starting rpc services: rpcbind keyserv ypbind done.
Setting netmask of hme0 to 255.255.255.0

Setting default IPv4 interface for multicast: add net 224.0/4:


gateway johnson
syslog service starting.
Print services started.
volume management starting.
Webserver starting.
The system is ready.
johnson console login:


When a new hardware device is added to the system, a reconfiguration boot must be
performed, so that the appropriate physical and logical device files can be created in the
/devices and /dev directories respectively. This can be performed in one of two ways:
from the OpenBoot PROM monitor, or from a root shell. Using the OpenBoot method, the
-r option is simply passed at the ok prompt with the boot command:

ok boot -r


Alternatively, from a root shell, the following command can be executed:

# sync; touch /reconfigure; init 6


This command will synchronize disk data, and perform a reconfiguration reboot.
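
After the system comes back up, the new logical device files should be visible under /dev. For example, if a new disk has been attached as target 1 on controller 1 (a hypothetical address), its slices could be listed with:

# ls -l /dev/dsk/c1t1d0s*
# ls -l /dev/rdsk/c1t1d0s*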

Exercise 6A: Booting


Check SMC entries for all system-provided and third-party services. Are there any
options which you can change to improve performance or reliability?


Theme 6.2: Making File Systems


File systems can be accessed by using their raw or block devices, which are stored in the
/dev/rdsk and /dev/dsk directories respectively. Raw disk devices are used to operate at a
low level on a disk, while block devices are used at a higher level, where buffers and other
enhancements are required. One example where raw devices are used in preference to
block devices is the creation of a new file system using either the newfs or mkfs
command.

The newfs command is a UFS-specific version of mkfs, which can create file systems of
several different types, and with several different options. In the example below, the newfs
command is used to create a new file system on /dev/rdsk/c0t0d0s5:

# newfs /dev/rdsk/c0t0d0s5


This command is equivalent to the following mkfs command:

# mkfs -F ufs /dev/rdsk/c0t0d0s5


The newfs command has the following options:

-a q: reserves q blocks to be substituted for bad blocks.
-b q: specifies the size of file system blocks to be q bytes.
-c q: provides q cylinders for individual cylinder groups.
-C q: sets q as peak contiguous disk block count for each file.
-d q: specifies the rotational delay to be q milliseconds.
-f q: specifies the minimum size (q bytes) for an individual file disk fragment.
-i q: sets q bytes aside for each inode.
-m q: sets aside q% of the physical filesystem as a reserve.
-n q: specifies group cylinder rotation number to q.
-r q: specifies peak disk RPM to q.
-s q: specifies the disk size as q sectors.
-t q: sets q tracks aside for each cylinder
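
Several of these options can be combined in a single invocation. The following is an illustrative sketch only; the slice name and the values chosen are arbitrary:

# newfs -b 8192 -i 2048 -m 10 /dev/rdsk/c0t0d0s5

This would create a UFS file system with 8192-byte blocks, one inode for every 2048 bytes of data space, and 10% of the file system held in reserve.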

The mkfs command can be used to create file systems of the following types:

ufs UNIX file system

udfs Universal disk file system

pcfs MS-DOS file system



Being able to create a pcfs file system is useful when creating file systems on floppy disks
that are to be shared between a PC and a Solaris system. For example, the following
command will create a new MS-DOS file system on the local floppy disk:

# mkfs -F pcfs /dev/rdiskette

Exercise 6B: Making File Systems


Devise a newfs command string that would create a UFS file system on
/dev/rdsk/c0t0d0s5 with a peak disk RPM of 10,000.
Devise a newfs command string that would create a UFS file system on
/dev/rdsk/c0t0d0s5 with 1,024 bytes allocated to each inode.
Devise a newfs command string that would create a UFS file system on
/dev/rdsk/c0t0d0s5 with 5% of the file system set aside as a reserve.


Theme 6.3: Monitoring File Systems


An important task of a Solaris system administrator is monitoring the capacity of mounted
file systems to ensure that they are not filled to capacity before being expanded in some
way. System critical applications may fail if certain file systems are completely filled, and
data cannot be written to a drive. The result is sometimes a hung system, with a message
repeated hundreds of times on the console that a file system is full. To avoid this problem,
administrators should use the df command to display the amount of free disk space on all
mounted file systems. The df command display looks like this:

# df
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 1048576 524288 524288 50% /
/dev/dsk/c0t0d0s4 349525 249525 100000 72% /usr
/proc 0 0 0 0% /proc
/dev/dsk/c0t0d0s3 349525 203456 146069 58% /var
swap 256000 43480 212520 17% /tmp
/dev/dsk/c0t0d0s5 4119256 3684920 393144 91% /opt

Although you can spend countless hours, days and nights monitoring df, looking for
capacities close to 100%, a better method to use is linear estimation to provide an
educated guess as to when the file system will be full. This involves using a spreadsheet to
make a forecast on the basis of data collected each month from df. Imagine if the
following readings had been taken over a period of 11 months:

Month    Capacity (%)
Jan
Feb
Mar
Apr      16
May      32
Jun      64
Jul      68
Aug      76
Sep      83
Oct      84
Nov      90

Now, with a reading of 90% full, it's not clear whether you should take action now to resolve the space problem, since some early months only saw changes of 1-2%, meaning that another 5-10 months might be required for the file system to fill up. Alternatively, some months saw a 32% rise, meaning that only a few days might be left until the file system reaches capacity. While there is no way to tell the future, by using a linear prediction model, it's possible to predict the capacity of the disk at the end of the next month. It's also possible to evaluate how well the model fits the previous data by using the square of the correlation coefficient (R²).



Figure 1 shows the result of performing a linear regression on the capacity data to predict a 99.18% capacity by the end of the coming month. Future capacity values can be predicted by the equation produced from the regression (y = 9.7727x - 10.924). In addition, with the fit of the model (R²) equal to 0.9365, more than 93% of the variation in the existing data can be explained by this equation. This suggests that the model is reliable, and may be used to predict future capacity values.
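
Rather than maintaining a spreadsheet, the same least-squares fit can be computed directly from the command line. The following is a minimal sketch, assuming at least two monthly readings have been saved in a file called capacity.txt (a hypothetical name) as pairs of month number and capacity percentage, one pair per line:

$ awk '{ n++; sx += $1; sy += $2; sxx += $1*$1; sxy += $1*$2; last = $1 }
       END { b = (n*sxy - sx*sy) / (n*sxx - sx*sx); a = (sy - b*sx) / n;
             printf("y = %.4fx + %.4f; predicted for month %d: %.2f%%\n",
                    b, a, last+1, a + b*(last+1)) }' capacity.txt

Here b is the slope and a is the intercept of the fitted line, and the final value printed is the predicted capacity for the month after the last reading.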

What action can an administrator take when a disk volume is approaching capacity? The
following strategies should be investigated:

Use the find command to locate core files and remove them. Core files are memory
dumps that are produced when an application crashes, and contain debugging
information that developers rarely seem to use. Thus, they can usually be deleted (but
check with your developers first).
Use the find command to locate the largest files on the file system. Use the gzip
command to repeat pack and compress them.
Enforce user quotas.
Buy a larger capacity disk and transfer user directories from the original disk.

Use RAID technology (striping) to logically extend the length of the file system.

By using these strategies in combination, disk space can usually be extended in times of
crisis.
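
As a rough illustration of the first two strategies (a sketch only; review the output carefully before adding -exec rm to anything), the following commands could be used:

# find / -name core -type f -exec ls -l {} \;
# find / -type f -size +20000 -exec ls -l {} \;

The first command lists files named core anywhere on the system; the second lists files larger than roughly 10 MB (find counts 512-byte blocks by default), which are candidates for compression with gzip.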

Exercise 6C: Monitoring File Systems


Devise a find command to locate core files and remove them.
Devise a find command to list the ten largest files and gzip them.

Theme 6.4: Repairing File Systems


Traditionally, file systems can be repaired by using the fsck command during single user
mode, when file systems have not been mounted. However, since UFS formats can now
have a transactional log, there has been much less use of fsck in recent years. File system
logging allows all transactions to be recorded before they are committed to disk. This
allows these transactions to be written to a disk if the system unexpectedly reboots. Thus,
it's very difficult for a disk to contain inconsistent data. Most repairs with today's
systems involve tuning the file system to perform well in a specific environment. While
many parameters can be used during file system creation to optimize performance, these
may need to be changed over the life of the disk.

File system inconsistencies can occur for several reasons, including unexpected power
loss, not synchronizing disks before changing init states, and hardware faults. The fsck
program can be used in single user mode to perform a number of checks, including the
superblock. The superblock is at risk if disk data is not correctly synchronized during
shutdown. The superblock is checked by verifying the size of the file system, counting the
number of inodes, and checking the number of free blocks. If these figures don't tally, then corrective action must be taken. For example, if the tally of inodes is greater than the maximum number of inodes on the file system, then clearly there's a problem. One of the great features of UFS is that multiple copies of the superblock are kept, so that if one becomes corrupt, another can be used instead. fsck allows superblocks to be repaired and made
consistent.
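
For example, to check and repair the file system on a specific slice while it is unmounted (a sketch; the slice name is arbitrary, and the -y option answers yes to all repair prompts):

# fsck -y /dev/rdsk/c0t0d0s5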

The tunefs command tunes a file system for optimal performance. This can either be
performance optimized for speed, or performance optimized for disk space. For systems
that are critically low on space, an optimization for disk space should be performed.
For systems that are currently sluggish and need to be faster, an optimization for speed
should be performed. The tunefs command takes the following options:

-a r: writes r blocks before halting a rotation.
-e r: uses at most r contiguous disk blocks for each file.
-d r: sets the rotational delay to r milliseconds.
-m r: requires a reserve of r% free space on the file system.
-o space: tunes the file system for space.
-o time: tunes the file system for speed (access time).


Exercise 6D: Repairing File Systems


Read the man page for fsck. Summarize the five different phases and their roles.
Devise a command string for tunefs that maximizes speed and sets the rotational
delay to 100 ms.

Assignment 6.1: Disk Volumes


Description
Display a list of disk volumes on your system.

Procedure
1. Login to your system.
2. Using the df command, create a list of mounted volumes.

Assignment 6.2: System Configuration


Description
Print your local system configuration, removing all non-disk devices.

Procedure
1. Login to your system.
2. Create a file from the output of the prtconf command.
3. Remove all the non-disk entries from the file.

Assignment 6.3: Raw and Block Devices


Description
For one physical disk, list the corresponding raw and block device files.

Procedure
1. Login to your system.
2. Using the df command, select one mounted file system.
3. Record its raw and block device file names.
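
For example (a sketch; the slice shown is arbitrary), the block and raw device files for a mounted file system can be listed side by side:

# df -k /
# ls -l /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0

Both entries are typically symbolic links into the /devices tree, with the raw device distinguished by a ,raw suffix at the physical device level.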

Module 7 - Disks, Backup and Restore


Overview
Before disks can be used on a system, they must be formatted to ensure that no surface
errors exist that would prevent data being read and/or written correctly. The format
command is complex and contains a number of options, including surface analysis, which
are explained in this module. Since the Solaris disk naming convention is complicated,
it may take some time to relate a disk slice like c0t0d0s3 to [c]ontroller 0, [t]arget 0, [d]isk
0 and [s]lice 3.

Once a disk file system has been created, it needs to be backed up on a regular basis, by
using a full or incremental dump. This ensures that, when (not if) the disk eventually
experiences a media failure, the contents of the disk can be restored easily. In this module,
the standard Solaris backup and restore procedures are covered in depth. Although many
sites use commercial, third-party backup software, Solaris provides a number of tools that
can perform both incremental and full dumps.

Learning Outcomes
Upon successful completion of this course, students will be able to:

State the purpose of the format command and identify how it is used.
Describe the menu selections available for the format command.
Describe how to use the partition option with the format command.
State the role of backup and restore applications.
Describe how to backup a file system to tape.
Describe how to restore a file system from tape.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: Format.
3. Read Theme 2: Partition.
4. Read Theme 3: Backup.
5. Read Theme 4: Restore.
6. Complete Assignment 7.1: Format.
7. Complete Assignment 7.2: Ufsdump.
8. Complete Assignment 7.3: Ufsrestore.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Oracle Solaris Administration: Common Tasks, Chapter 8


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 7.1: Format


Solaris systems generally store data on hard disk drives, although alternative media formats are
also supported. The format command is used to prepare disks for file system creation, and
therefore plays a key role in the creation of file systems. In Microsoft Windows,
formatting a disk only requires a command like format c:, or a couple of clicks in the
Disk Administrator application. However, since SPARC systems were designed to support
large numbers of disks and controllers, the convention for naming disks is more complex,
as is the formatting process.

Solaris disks are named with the convention [c]ontroller, [t]arget, [d]isk and [s]lice, and
are referred to using either their block or raw device name. For example, slice 3 on disk 2
on target 1 on controller 4 would have the raw device name /dev/rdsk/c4t1d2s3, and the
block device name /dev/dsk/c4t1d2s3. Since the format command operates on entire
disks, the raw disk device /dev/rdsk/c4t1d2 would be passed to the format command to
create slices like c4t1d2s3.

One of the benefits of Solaris is that multiple partitions can be created to store completely
independent file systems. Creating these partitions is a key function of the format command, although a number of other functions are also provided. These functions can be summarized as follows:

disk: choose a disk to format
type: choose a disk type for formatting
partition: determine a partition table
current: display the current disks parameters
format: format the disk
fdisk: run fdisk
repair: fix faulty sectors
show: display a disk address
label: creates a label on the disk
analyze: perform surface checks
defect : manage a list of disk faults
backup: investigate disk backup levels
verify: print disk labels
save: write new disk partition data
volname: writes the volume name to disk
quit: exit the application

When you start the format program with no disk name parameters, the installed disks on
the system are displayed:

Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c0t1d0 <ORACLE2.10 cyl 4072 alt 2 hd 14 sec 72>
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/
sd@1,0
1. c0t2d0 <ORACLE2.10 cyl 4072 alt 2 hd 14 sec 72>
/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/
sd@2,0
Specify disk (enter its number):


To operate on a specific disk, simply enter the appropriate disk number. If the disk is new,
it will need to be formatted, in which case the format menu command should be selected:

format> format
Ready to format. Formatting cannot be interrupted
and takes 30 minutes (estimated). Continue?

If the disk has previously been formatted, the following message will be displayed:

[disk formatted]


At this point, all of the operations described above can be performed.

Exercise 7A: Format


Consult the man page for format. Summarize the main functions of format.
Use the format command to display a list of disk devices installed on your system.


Theme 7.2: Partition


While the format command has many options, its key role is to lay out the disk according to a specification for disk slices with specific locations and sizes. For example, one disk might contain a complete bootable system, with partitions containing file systems like /, /usr, /var and /export. However, another disk might contain only a single slice that hosts the
/opt file system. The decision to use a specific slice for a named file system is always
relative: a file system initially mounted on /opt could be unmounted and remounted on
/data. The format command does not deal with the names of file systems, only their
underlying slices.

When the format command writes information about slices and their physical locations (in
terms of sectors) to disks, a table of contents is created. This contains the absolute location
of disk sectors for each slice on the disk. To display the disk table of contents, the prtvtoc
command is used:

# prtvtoc /dev/dsk/c0t0d0s2
* /dev/dsk/c0t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 63 sectors/track
* 255 tracks/cylinder
* 16065 sectors/cylinder
* 2040 cylinders
* 2038 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 0 988899 988898 /
1 7 00 988899 1029383 2018281 /var
2 5 00 0 32708340 32708339
3 3 01 2018281 2000000 4018280
6 4 00 4018281 2000000 6018280 /usr
7 8 00 6018280 40000000 46018279 /export/home


Here, we can see that the partition table identifies five user partitions, excluding partition
2, which represents the whole disk.

Exercise 7B: Partition


Using the format command, print a list of defined partitions for one disk. Compare
the results with the output from prtvtoc. Which command provides the most
information?
Read the man page for prtvtoc. Identify any useful options, and use these to display
more detailed information about the volume table of contents.


Theme 7.3: Backup


Performing regular backups protects valuable data from unintentional erasure caused by crackers, novice users or hardware failure. A backup provides an off-line mechanism for storing a set of files extracted from a file system. Traditionally, analog tape has been the major backup medium for Solaris systems; however, newer innovations, such as Digital Audio Tape (DAT), have become a popular way of reliably storing gigabytes of
data. Other backup devices include writeable CD-ROMs.

The key requirement for backups is rapid restoration times in case a file needs to be
restored. This requirement must be balanced against the need to conserve expensive tapes,
and reduce the amount of time a system (and possibly network) is loaded up with dumping
files to tape. This is why there are two major backup strategies in use: full dumps and
incremental dumps. A full dump involves a copy of all files defined in a backup set being
written to a tape or set of tapes. A full dump can take several hours to perform, depending
on the size of the backup and the raw writing speed of the backup device. The major
advantage of a full dump is that it often fits on a single DAT tape, and a file can be quickly
restored from that tape.

Alternatively, an incremental dump is based on minimizing the amount of data written to
backup tapes each day. On a weekly cycle, a full dump is performed on Sunday. Then, for
every file in the backup set that is modified each day until the next Sunday, a fresh copy is
written to a new tape. This means that a number of different tape sets must be maintained.
Potentially, if a file has been changed and the incremental tape is lost, then only an earlier
version may be restored. However, given the large reduction in the time taken to perform
backups each day when an incremental approach is adopted, its unsurprising that this
method is frequently used in industry.

To create a full dump of a file system stored on /dev/rdsk/c0t0d0s5, to the tape device /dev/rmt/0, the following command could be used:

# ufsdump 0cu /dev/rmt/0 /dev/rdsk/c0t0d0s5
DUMP: Writing 63 Kilobyte records
DUMP: Date of this level 0 dump: Sun May 12 12:03:12 2016
DUMP: Date of last level 0 dump: the epoch
DUMP: Dumping /dev/rdsk/c0t0d0s5 (solaris:/opt) to /dev/rmt/0.

To perform incremental backups, a slightly different strategy is required: dump levels must be determined in advance, to ensure that the ufsdump program knows when to perform a full dump (usually once every week). The full dump has a dump level of 0; all incremental dumps must be numbered greater than 0, in ascending numerical order, up to a maximum of 9. When a dump is performed at level 0, a full dump is performed, and the incremental dump cycle is restarted. Thus, to perform an incremental dump of /dev/rdsk/c0t0d0s5 on the Monday after a full dump on Sunday, the following command would be used:

# ufsdump 1cu /dev/rmt/0 /dev/rdsk/c0t0d0s5
DUMP: Writing 63 Kilobyte records
DUMP: Date of this level 1 dump: Mon May 13 12:01:12 2016
DUMP: Date of last level 0 dump: Sun May 12 12:03:12 2016
DUMP: Dumping /dev/rdsk/c0t0d0s5 (solaris:/opt) to /dev/rmt/0.
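
Putting the two together, one simple weekly cycle might look like this (a sketch only; real schedules are usually driven from cron, and the levels chosen vary from site to site):

Sunday: ufsdump 0cu /dev/rmt/0 /dev/rdsk/c0t0d0s5
Monday: ufsdump 1cu /dev/rmt/0 /dev/rdsk/c0t0d0s5
Tuesday: ufsdump 2cu /dev/rmt/0 /dev/rdsk/c0t0d0s5

and so on, increasing the dump level each day up to level 6 on Saturday.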

Exercise 7C: Backup


Using the ufsdump command, identify the size of a potential backup of a single file
system by passing the S parameter. Verify that your backup tape can hold a file of
this capacity.


Theme 7.4: Restore


When backups are successfully performed, a record is written to the /etc/dumpdates file as
follows:

# cat /etc/dumpdates
/dev/rdsk/c0t0d0s5 0 Sun May 12 12:03:16 2016
/dev/rdsk/c0t0d0s5 1 Mon May 13 12:01:12 2016


After files have been successfully backed up to tape, they can be easily restored by using the ufsrestore command. This command does not automatically write the files to the absolute location from which they were recorded. Instead, they can be written to a temporary directory (such as /tmp) and compared with the current copies on disk. The hierarchical structure of the dump is always preserved; thus, if files are recorded from /usr/local, including /usr/local/games and /usr/local/bin, and the files are restored to /tmp, then /tmp/local, /tmp/local/games and /tmp/local/bin will all be preserved.

To restore data from the tape drive /dev/rmt/0, the following command can be used:

# ufsrestore xf /dev/rmt/0
You have not read any volumes yet.
Unless you know which volume your file(s) are on you should start
with the last volume and work towards the first.
Specify next volume #: 1
set owner/mode for .? [yn] y


This will extract all of the files from the first volume recorded on the tape. It is
possible to record multiple volumes on a tape, but to avoid the risk of accidental
overwriting, it is suggested that a separate tape be used for each volume backed up.

If you have a backup volume, but you're not sure what files are located on the tape, then the following command can be used to display a table of contents:

# ufsrestore tf /dev/rmt/0
74333 ./local/bin
34341 ./local/games

108674 ./local


The following commands are supported by ufsrestore when executed in interactive mode
from the command line:

ls: display directory contents
cd: change absolute or relative directory
pwd: display current working directory
add: adds a file to a list of files to be retrieved
delete: removes a file from a list of files to be retrieved
extract: retrieves listed files
setmodes: sets permissions on retrieved files
quit: quits ufsrestore
what: prints tape header information
verbose: switches to verbose mode
help: displays the ufsrestore help screen
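
An interactive session might proceed along these lines (a sketch only; the directory name added to the extraction list is arbitrary):

# ufsrestore if /dev/rmt/0
ufsrestore > ls
ufsrestore > add local/bin
ufsrestore > extract
ufsrestore > quit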

Exercise 7D: Restore


Using ufsrestore, display a file listing for a full dump and incremental dump.
Read the man page for ufsrestore, and make a list of any options that can be used.


Assignment 7.1: Format


Description
Use the format command to display a list of installed disks.

Procedure
1. Login to your system as root.
2. Execute the format command.
3. Display a list of installed disks.

Assignment 7.2: Ufsdump


Description
Use the ufsdump command to back up the /etc directory to a tape device.

Procedure
1. Login to your system as root.
2. Identify the name of your tape drive device.
3. Use the ufsdump command to back up the /etc directory to the tape device.

Assignment 7.3: Ufsrestore


Description
Use the ufsrestore command to restore the /etc directory from a tape device to the /tmp directory.

Procedure
1. Login to your system as root.
2. Identify the name of your tape drive device.
3. Use the ufsrestore command to restore the /etc files to the /tmp directory.
4. Cross-check the /etc file listing with /tmp to ensure that all files have been restored correctly.

Module 8 - Basic Commands, Editors and Remote Access


Overview
Since much of the operation of a Solaris system involves command-line administration,
it's important to become competent with using the shell and the various utilities that can
be used with pipelines and other logical operators. Students will learn the bulk of these
commands and shell logic in this module, although some aspects will have been covered
in previous chapters. Basic commands to create, delete or update files will be given.
Special emphasis will be placed on editing new and existing text files by using the visual
editor (vi).

Remote access to a Solaris system allows multiple users to login concurrently, spawn
separate shells, and execute different jobs. After mastering all of the topics covered in this
course, these skills can finally be applied to solving real world problems by allowing other
users to login to a system, and provide services. This module covers the basic aspects of
TCP/IP networking required to manage and support remote services, and discusses some
of the key security issues associated with providing remote access. We also cover the
configuration of local and remote printing services.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe how to navigate through a file system using standard shell commands.
State how to use wildcards to relate groups of files.
List the commands used to print directory entries and their file types.
Describe how to create or delete directories.
List the commands required to copy, create, move, or remove files.
Describe how to edit files using the vi editor.
List basic vi commands.
State how to search and replace strings using vi.
Describe how to remotely access a Solaris system.
List the commands used in FTP to transfer files between hosts.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: Basic Commands.
3. Read Theme 2: Editor.
4. Read Theme 3: Remote Access.
5. Complete Assignment 8.1: Major Assignment.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Oracle Solaris Administration: Common Tasks, Chapter 14


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 8.1: Basic Commands


In Solaris, user commands are executed through a shell, such as the Bourne shell (/bin/sh),
C shell (/bin/csh) or Bourne Again shell (/bin/bash). These shells allow applications to be
executed that ultimately call system routines or libraries to communicate with the kernel.
Applications must interact with the kernel in order to communicate with devices such as
consoles, keyboards, disk drives and tape devices. All users on a Solaris system have a
default shell that they can use in this way. Users can also change their shell; since different shells offer different functions, a user might start with the Bourne shell and switch to the C shell, for example, if they require more programming features.

The Bourne shell is the most basic shell available. It has the following features:

Allows arguments to be passed to applications
Allows standard input and output to be piped
Contains basic logical decision support (e.g., if/then/else and while constructs)
Guaranteed to be available on all UNIX systems
Permits data to be redirected between applications (or files) for overwriting or
appending
Permits multiple commands to be executed from a single semi-colon delimited
statement
Supports command iteration through the for statement
Supports environment variables
Used to execute most system scripts by convention

However, the Bourne shell also has a number of disadvantages:

Difficult to program
Does not have command history
Lacks modern terminal handling features

The Bourne shell has the following commands built-in:

.: executes a script contained in a file.
bg: switches a process from being suspended to executing in the background.
break: exits a loop.
cd: switch to a different working directory.
continue: continues a loop.

echo: displays the value of an environment variable or string.


eval: evaluates a user-defined declaration.

exec: executes a command.


exit: exits the shell.
export: sets an environment variable for the life of the shell.
fg: stops background execution of a job and brings it into the foreground.
jobs: prints a list of background jobs currently running.
kill: terminates or sends a signal to a process.
newgrp: changes the user's current working group ID.
pwd: displays the current working directory.
read: extracts data from standard input.
return: defines a shell function's return value.
set: sets an environment variable with limited scope.
times: prints a summary of system usage.
ulimit: enforces a resource limit on the system.
umask: defines a default permission to be assigned to all new files created by the
user.
unset: removes the definition of an environment variable from memory.
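
As a small illustration of several of the features listed above (environment variables, an if/then/else test and a for loop), the following sketch could be saved as check.sh (an arbitrary name), made executable with chmod u+x check.sh, and run as ./check.sh:

#!/bin/sh
# Report whether each listed directory exists
DIRS="/etc /var /no/such/place"
for d in $DIRS
do
    if [ -d "$d" ]; then
        echo "$d exists"
    else
        echo "$d is missing"
    fi
done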

Exercise 8A: Basic Commands


Consult the man page for the C shell (csh). Summarize any differences from the
features of the Bourne shell.
Write a Bourne shell script that takes input from the ls command and sorts it reverse
alphabetically using the sort command.
Write a Bourne shell script that takes input from the ls command and sorts it
alphabetically using the sort command.


Theme 8.2: Editor


Editors are text processing tools that perform the following basic functions:

Allow new text files to be created
Allow the contents of existing files to be modified
Support the searching of a file for a specific text string
Support the searching of a file for a specific text string and its replacement with
another text string
Copying of a text string from one location in the file to another
Insertion of a copied text string to any location in the file
Deletion of a text string from one location in the file

The Visual Editor (vi) is found on all UNIX systems. vi works by reading an existing file
into a memory buffer, into which changes intended for the file are made. When the
changes have been completed, the buffer is written to the disk file from which it was read.
vi operates in two modes: command mode and edit mode. In command mode, commands can be issued to the editor, while in edit mode, data is inserted directly into the buffer. Unlike other editors, vi requires you to navigate through the file (using the arrow keys on the keyboard) during command mode only; if you press the arrow keys during edit mode, strange results are assured.

The following keystrokes are commonly used in command mode:

/ - performs a forward search for a text string
? - performs a backwards search for a text string
: - runs an ex editor command on the current line
! - executes a shell within vi
ZZ - saves the file and exits
h - moves the cursor left
j - moves the cursor down
k - moves the cursor up
l - moves the cursor right
nG - moves the cursor to line n
w - moves to the next word
b - moves back one word
dw - deletes a word
ndw - deletes n words
d^ - deletes to the beginning of the line
dd - deletes the current line
dG - deletes all lines to the end of the file
D - deletes to the end of the line
x - deletes the current character
nx - deletes n characters to the right
nY - yanks n lines into the buffer
p - pastes to the right of the cursor
P - pastes to the left of the cursor

At the individual line level, ex commands can be entered by preceding the commands with
a colon:

:n - moves the cursor to line n
:$ - moves the cursor to the end of the file
:%s/a/b/g - replaces all occurrences of string a with string b
:wq - saves the modified file and quits
:q! - quits without saving any changes
:set - sets a number of different options.

Exercise 8B: Editor


Make a copy of /etc/group. Using vi, search for every occurrence of ":" and replace it with "-".
Make a copy of /etc/passwd. Using vi, go to the 10th line, and yank every other line until the end of the file.
Make a copy of /etc/ftpusers. Using vi, yank the entire file and paste the copy at the beginning of the file.

Theme 8.3: Remote Access


Since Solaris is a multi-user operating system, and SPARC systems have only one
console, some kind of remote access facilities must be supplied, to allow users to login
and spawn shells. Traditionally, the most common forms of remote access have been
Telnet and File Transfer Protocol (FTP) clients. These allow users to connect through the
Internet, or a local area network, and either spawn interactive shells or transfer files
respectively.

Telnet is derived from the original DARPA Telnet protocol, and the Solaris Telnet server
(running through inetd) supports connections from any host that supports TCP/IP and
which has a Telnet client. Thus, users running Linux, Windows, MacOS or other forms of
UNIX can easily login to a Solaris system by using Telnet. Telnet supports a number of
terminal emulations, including VT-100 and VT-220. In an X11 graphics environment, it
also allows users to remotely login from client systems to servers, enabling server-side
execution of applications that appear on the clients console. This is one of the key
benefits of centralizing CPU power and storage capacity in a single system, since users
can effectively make the best use of shared resources.

Telnet allows a number of useful tests to be performed, since it can be used to make a
connection to any TCP port. Thus, if you need to check that a mail server or web server is
accepting connections, it is possible to connect directly to the TCP port concerned and
enter commands. For example, to test a web server operating on port 80, the following
command could be used to test if it is operational:

$ telnet dalek 80
Trying 192.168.204.32...
Connected to dalek.
Escape character is '^]'.


At this point, a valid HTTP command sequence can be entered, and if the server is
working, then the appropriate data should be returned:

GET /index.html
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML><HEAD>
<TITLE>Dalek Index Page</TITLE></HEAD>
<h1>This is the dalek operations server</h1>
.


The ftp command is similar to the telnet command, in that they are both TCP clients. The
ftp command creates a connection to a remote FTP server, allowing files to be transferred
as required. The following output shows a sample FTP session:

$ ftp dalek
Connected to dalek.
220 server FTP server (SunOS 5.11) ready.
Name (dalek:davros): davros
331 Password required for davros.
Password:
230 User davros logged in.
ftp>


At this point, users can issue GET or PUT commands to transfer files from and to the server respectively, in either ASCII or binary mode.

Both Telnet and FTP have been identified as security risks in recent years, since a user
must enter their username and password in order to authenticate themselves. The problem
here is that the username and password are sent in the clear, and can be intercepted by
any other system whose network interface is operating in promiscuous mode. This
provides crackers with the ability to intercept usernames and passwords and use them to
break into your system.

One solution to the Telnet and FTP security problems is to use Secure Shell (SSH) and the
Secure Copy (SCP) programs, since SSH and SCP provide sophisticated mechanisms for
the secure exchange of authentication tokens like usernames and passwords. In this case,
an interactive login can be obtained by using the SSH program, while transferring files can
be achieved by using the SCP program. Both applications make use of cryptography to
effectively hide any data transferred from client to server and server to client. Although
the packets can be intercepted by a third party, their contents will be meaningless, unless
the interceptor has obtained the private key of the user. In addition, a session key is
required to decrypt the data from an individual session. This combination makes it very
unlikely that a cracker would be able to decode data transmitted across a secure link.
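
For example, an interactive login and a file transfer over the encrypted channel might look like this (a sketch; the host name dalek and user name davros are carried over from the earlier examples):

$ ssh davros@dalek
$ scp file.txt davros@dalek:/tmp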

Exercise 8C: Remote Access


Use the telnet command to query a FTP server running on port 21.
Use the telnet command to query a web server running on port 80.
Download and install SSH from www.openssh.org. Use the snoop tool to verify that
packets are encrypted and that their contents cannot be interpreted.


Assignment 8.1: Major Assignment


Description
Develop a menu-based shell script application that allows all of the administrative functions covered in this course to be performed from a simple interface. This application could require error codes to be interpreted, file paths to be checked prior to execution, and so on. Write a report about the application.

Procedure
1. Define the functions that the application will perform (e.g., adding a new
user).
2. Design the application.
3. Create the scripts.
4. Test the scripts.
5. Write a report about the application describing its design, implementation and
testing.
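
A very small skeleton of such a menu is sketched below; the actions shown (df -k and who) are placeholders only, and a real submission would call the administrative commands covered in earlier modules and add error checking:

#!/bin/sh
# admin_menu.sh - minimal menu loop skeleton (illustrative only)
while true
do
    echo "1) Show disk usage"
    echo "2) Show logged-in users"
    echo "3) Quit"
    echo "Enter choice: \c"
    read choice
    case "$choice" in
        1) df -k ;;
        2) who ;;
        3) exit 0 ;;
        *) echo "Invalid choice: $choice" ;;
    esac
done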

Module 9 - Clients and Servers


Overview
UNIX systems have come to represent a large part of the enterprise system market, since
UNIX systems feature a multi-process, multi-threaded and multi-user operational model.
This scalability extends to networks and computing clusters, where multiple systems work
together to provide wide area, highly available services and application for e-commerce,
e-business and vertical industries. Many of these services are built using Oracle's Java 2 Enterprise Edition (J2EE) development and deployment environment, and while Java implements a "write once, run anywhere" architecture, the
choice of hardware platform is critical. This is why many organizations have chosen
Oracle's Solaris operating environment and SunOS operating system as their UNIX
platform of choice. Tight integration between key network and distributed services, such
as the Network File System (NFS) and the Network Information Service (NIS) make
Solaris a natural choice for developing highly available systems.

When deploying applications within the enterprise, security is a key concern, since
multiple users may need to be authenticated across systems for access to resources. Solaris
supports key authorization systems like Kerberos to ensure that single sign-on and
simplified authentication procedures, from the user's perspective, are coupled with secure
back-end services, like running a Virtual Private Network (VPN) using IPSec. Solaris
provides a wide variety of tools to assist in the implementation and maintenance of
security procedures.

The key architecture implemented by Solaris systems to deploy services is a client/server
model, where a centralized server offers services which are utilized by clients. The
client/server architecture is pervasive in the UNIX and Solaris world, from hardware to
software. For example, at the software level, the Telnet application allows users on client
systems to connect to a remote server system and execute commands through a shell. At
the hardware level, Oracle sells diskless Sun Ray clients that require a server to boot from and store all of their files on. Since a server provides an effective means of sharing
CPU power and mass storage, Solaris has favored this approach over the fat client
architecture used by PCs. In this module, students will learn the basic principles of the
client/server architecture and how they are implemented in Solaris.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe the server types implemented in Solaris.
Describe the client types implemented in Solaris.
State the steps required to install a Solaris server.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: Servers.
3. Read Theme 2: Clients.
4. Read Theme 3: Server Installation.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Oracle Solaris Administration: Common Tasks, Chapter 16


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 9.1: Servers


The term "server" can refer to two distinct entities, which are often confused by novices:

A hardware server is a computer system which supports so-called "thin" clients by providing file sharing and collective CPU utilization on a typically multi-process system.
A software server is any program that accepts connections and/or requests from clients and processes them.

Generally, hardware servers host software servers, which is why their combination
is referred to generically as a server. However, this is not strictly a 1:1
relationship, since a hardware server often hosts many different software servers.
For example, a Solaris hardware server may support the following software servers:

Domain Name Service (DNS) server
Network Information Service (NIS) server
Network Information Service+ (NIS+) server
Lightweight Directory Access Protocol (LDAP) server
Samba server
Network File System (NFS) server
Print server
Secure Shell (SSH) server

These are just a few examples of the potentially hundreds of software servers that a hardware server can run concurrently. Indeed, one of the key goals of service centralization is to
minimize administrative overhead by having an administrator manage the smallest
number of hardware servers possible.

Another confusing aspect of client/server architecture is the fact that client systems
can also run servers. For example, most client systems will need to run an SSH server to allow remote administration from a server. Thus, it's not always possible
to cleanly associate software servers with hardware servers.

In the following discussion, we'll refer to software servers simply as servers. The
basis of all client/server applications in Solaris and other systems which support
TCP/IP networking is the socket. A socket is a special file descriptor which enables
network communications between a client and a server. A socket can be created by
executing the following system call in C:


int socket(int domain, int type, int protocol);


where domain is generally PF_INET (for supporting IP), and the socket type can be
a stream (SOCK_STREAM) for TCP, datagram (SOCK_DGRAM) for UDP, or raw
(SOCK_RAW) for use only by the super-user. The protocol number for the TCP/IP
family is 0, so a stream TCP socket can be created by using the following call:

int sd = socket(PF_INET, SOCK_STREAM, 0);


Both a server and client application require a socket to be created in order to
communicate with each other. When a server starts up, it begins by listening on a
specific port number for client requests. For example, the sendmail server listens for
TCP connections on port 25. The ports on which servers listen is mapped by the
services database stored in /etc/services. The following entries show a set of standard
service entries:

ftp 21/tcp
telnet 23/tcp
smtp 25/tcp
whois 43/tcp
domain 53/tcp
domain 53/udp
tftp 69/udp
finger 79/tcp


To locate a server on the network, the client uses the gethostbyname() or gethostbyaddr() library call to retrieve IP address information for the server. Once the client can find the server, it can retrieve the port number for a specific service by using the following call:

getservbyname(service, "tcp")


Here, service is the service name to be requested, and "tcp" is the protocol. Once a
connection has been established between a client and server, with a specific service
request, then requests can be made and responses dispatched as per the protocol
concerned. This generally involves sending a string from the client which is
extracted from standard input on the server. The status of socket connections can be

examined by using the netstat command:



# netstat -a
TCP
Local Address Remote Address Swind Send-Q Rwind Recv-Q State

*.* *.* 0 0 0 0 IDLE
*.sunrpc *.* 0 0 0 0 LISTEN
*.* *.* 0 0 0 0 IDLE
*.ftp *.* 0 0 0 0 LISTEN
*.telnet *.* 0 0 0 0 LISTEN
*.shell *.* 0 0 0 0 LISTEN
*.login *.* 0 0 0 0 LISTEN
*.lockd *.* 0 0 0 0 LISTEN


A clearer relationship between port numbers, server processes and client connections can be observed by combining the output of the ps and netstat commands:

$ ps -eaf | grep nfsd
root 629 1 0 Feb 27 ? 0:11 /usr/lib/nfs/nfsd -a 16

$ netstat -a | grep nfsd
TCP
Local Address Remote Address Swind Send-Q Rwind Recv-Q State

*.nfsd *.* 0 0 0 0 LISTEN


Here, the NFS server daemon (nfsd) can be seen in both the process list and the
socket list. All servers should have an entry in the process list, unless they have
been spawned by the Internet daemon (inetd), and they should also have a
corresponding socket entry when listening for connections.


Exercise 9A: Servers


Read the man page for the netstat command. Make a list of all the available options and summarize their functions.
Read the man page for the getservbyname library call and make a list of the protocols that it supports.

Theme 9.2: Clients


Clients are programs that connect to servers to perform a specified action or set of actions.
For example, a mail client such as the elm program connects to a server running a Mail
Transfer Agent (MTA). Client programs often provide a user interface to an information
service or some other kind of server. For example, Usenet news is carried by news servers
around the world. Users can access the resource by using a client system that connects to a
server to permit the reading of news articles and the posting of new news articles.

Let's look at a concrete example of how clients work in Solaris:



client:12:00:joe> telnet server 25
Trying 192.68.34.22
Connected to server.paulwatters.com.
Escape character is ^].
220 server.paulwatters.com ESMTP Sendmail 8.8.8/8.8.8; Mon, 3 Jun 2002 12:00:05 +1000 (EST)
HELO client
250-server.paulwatters.com Hello client.paulwatters.com [192.68.34.25], pleased to meet you
MAIL FROM: <joe@client.paulwatters.com>
250 <joe@client.paulwatters.com> Client ok
RCPT TO: <ernie@server.paulwatters.com>
250 <ernie@server.paulwatters.com> Recipient ok
DATA
354 Enter mail, end with . on a line by itself
Testing
.
250 MAA56574 Message accepted for delivery
QUIT
221 server.paulwatters.com closing connection
Connection closed by foreign host.


The client-side commands are typed by the user joe, who has initiated a client session, connecting from the client system to send a message to ernie at the remote system server, both within the domain paulwatters.com. Note that the telnet command can be used to act as a client here; no special mail client is required, although many mail clients offer better editing facilities and automation of the mail exchange process. The standard SMTP commands HELO, MAIL, RCPT, DATA, ".", and QUIT are used to communicate the message data and meta-data to the MTA running on server. After each request is sent by the client, the server responds with a specific response code, such as 220, 250, 354 and 221. These can be parsed by the client program when it receives a response.

Module 10: Solaris Network Environment


Overview
Oracle's often-quoted motto for Solaris is "The Network Is The Computer". This statement defines the network focus of the Solaris operating environment and the SunOS
operating system. Most operations on Solaris are supported in a networked environment,
and its important to keep this in mind while learning about the management and
administration of individual systems. For example, while single hosts store passwords in a
single file, using Oracle's Network Information Service (NIS/NIS+) allows user
passwords for an entire network to be stored centrally in a map or table. Thus, when a user
is authenticated on any system in the network, the same centralized record is read from the
table or map. This greatly simplifies network administration, and demonstrates the scope
of Solaris networking.

The goal of this module is to introduce students to Solaris networking within a client/server context. After working through the module, students should be able to
identify the key functions of each layer within the Open Systems Interconnect (OSI)
networking model, which has seven layers. In addition, students will learn the functions of
each layer of the five-layer TCP/IP networking model, and should be able to state the key
features and core functions of ethernet. Finally, students will learn a set of commands that
display critical information about the state of the local network and network interfaces.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe the role of the levels in the OSI stack.
Describe the purpose of the levels in the TCP/IP stack.
List the properties of ethernet networking.
State the commands used to monitor network interface status.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: OSI Stack.
3. Read Theme 2: TCP/IP Stack.
4. Read Theme 3: Network Interfaces.
5. Complete Assignment 10.1: Client/Server Benefits.
6. Complete Assignment 10.2: RPC Services.
7. Complete Assignment 10.3: Starting/Stopping Services.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Oracle Solaris Administration: Common Tasks, Chapter 17


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 10.1: OSI Stack


The Open Systems Interconnect Network Model (OSI) is an abstract model of networking
that separates all network operations, from hardware to software, into seven layers. The
OSI model is managed by the International Standards Organization (ISO). The OSI model
is shown in Figure 1. Layer 1 is the Physical Layer, which comprises the physical implementation of data transfer over the transmission medium. Layer 2 is the Data Link Layer, which is the
lowest control level in the model which manages data transmission. Layer 3 is the
Network Layer, which provides the infrastructure required to administer connections.
Layer 4 is the Transport Layer, which realizes actual network protocols in terms of
guaranteed packet delivery (or otherwise). Layer 5 is the Session Layer which provides an
interface between individual applications and the underlying transport on a per application
basis. Layer 6 is the Presentation Layer, which presents data to applications in a uniform
manner. Finally Layer 7 defines the Application Layer, where applications make calls to
lower layers, which in turn call each other to implement network operations.


Figure 1. The seven layers of the OSI model:

7. Application Layer
6. Presentation Layer
5. Session Layer
4. Transport Layer
3. Network Layer
2. Data Link Layer
1. Physical Layer

Exercise 10A: OSI Stack


Summarize the key properties of each layer in the OSI stack.

Theme 10.2: TCP/IP Stack


The TCP/IP suite of protocols was developed by the U.S. Department of Defense as a
result of the work of the Advanced Research Projects Agency (DARPA) into reliable
networking. The TCP/IP stack has many similarities to the OSI stack, and the layers in
TCP/IP map naturally onto some layers on the OSI stack. For example, both TCP/IP and
OSI have an Application Layer. Figure 2 shows all of the layers in the TCP/IP stack. Level
1 is the Network Layer, and encapsulates all hardware-related activities. This layer utilizes
the 48-bit Media Access Control (MAC) address of each network interface to identify
each host on the network. This number is usually stored in the memory of the network
interface card, although it may be possible to modify its value in software. When two
systems communicate with each other using the Internet Protocol (IP), the packets are
always transferred by identifying MAC addresses. These can be translated from the more
common 32-bit IP address that applications will generally use, by utilizing the Address
Resolution Protocol (ARP). On local area networks, ARP works by broadcasting a request
from a source host for a destination host's address. When the destination host receives the
request, it can respond with its address. The following example shows the MAC addresses
retrieved after a broadcast by using the arp command on a local network:

# arp -a

Net to Media Table: IPv4
Device IP Address Mask Flags Phys Addr

hme0 austin 255.255.255.255 00:50:ba:13:08:18
hme0 felicity 255.255.255.255 00:50:ba:78:40:03
hme0 ivana 255.255.255.255 SP 08:00:20:c6:a5:72


An integral part of the Network Layer is the choice of transmission media. Most modern
networks are composed of ethernet capable of transmitting 10Mbps (10BASE-T) or
100Mbps (100BASE-T). However, more modern networks run ethernet at speeds of
1Gbps (1000BASE-FX) or even 10Gbps. The great advantage of ethernet over earlier
network media is its ability to effectively share a single media for network transmission
between multiple hosts, since it is based on a bus architecture. Ethernet features an
advanced protocol to detect and minimize packet collisions between devices that wish to
transmit data concurrently. Obviously, as network bandwidth and speed increases, the
potential for collisions also grows. Other networking technologies employed at the
Network Layer include the Fiber Distributed Data Interface (FDDI), which is an optic
fiber implementation of a token ring. This technology ensures that there are no collisions,
but available bandwidth does not yet meet ethernet. An alternative architecture is provided
by Asynchronous Transfer Mode (ATM) networks, which are connection-oriented and

suitable for systems which are always on and which must have guaranteed quality of
service, such as video conferencing.

Level 2 of the TCP/IP stack is the Internet Layer. This layer implements the low-level
Internet Protocol (IP) that the transport protocols in Level 3 (the Transport Layer) rely on to
manage routing and packet assembly and disassembly. Being one level above the Network
Layer, IP uses IP addresses to identify hosts on the network; these IP addresses are
mapped to MAC addresses by ARP for delivery on the local network. While ARP works only within a
local area network, IP allows data to be exchanged between hosts on different networks. IP networks
are traditionally divided into classes for the purpose of defining sub-networks, or subnets, each
with its own mask, known as the netmask. Three classes are commonly used:
Class A (netmask 255.0.0.0), Class B (netmask 255.255.0.0) and Class C (netmask
255.255.255.0). Within subnets, IP addresses can be allocated manually to hosts, or
dynamically by using the Dynamic Host Configuration Protocol (DHCP). DHCP
conserves the pool of available IP addresses within a subnet by only allowing clients to
lease an address for a certain period of time. When the lease expires and is not renewed,
the IP address can be re-allocated to another host.
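
As a quick worked example, using the illustrative addresses that appear later in this module: the host 10.64.18.3 with the Class C netmask 255.255.255.0 lies on the network 10.64.18.0, the last octet (.3) identifies the host within that subnet, and the broadcast address is 10.64.18.255. Two hosts whose addresses share the same network portion can exchange packets directly (using ARP to find each other's MAC addresses); traffic to any other network must pass through a router.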

IP routing allows packets sent from one host to another to reach their destination, since
there may be many intermediate hosts between a client and server. For example, on a local
network, only a single hop is required to transmit packets between a client and server, as
the traceroute command shows:

# traceroute austin
traceroute to austin (10.64.18.1), 30 hops max, 40 byte packets
1 austin (10.64.18.1) 0.675 ms 0.392 ms 0.305 ms


However, to transmit packets across the Internet, many hosts may pass a packet along
from the client until it reaches the server. For example, to connect from a client in Sydney,
Australia, to Oracle's web server in the U.S., more hops are required:

$ traceroute www.Oracle.com
Tracing route to wwwwseast.usec.Oracle.com [192.9.49.30]
over a maximum of 30 hops:
1 184 ms 142 ms 186 ms 202.10.4.131
2 147 ms 288 ms 186 ms 202.10.4.129

3 483 ms 489 ms 484 ms corerouter2.SanFrancisco.cw.net [204.70.9.132]


4 557 ms 552 ms 561 ms xcore3.Boston.cw.net [204.70.133.81]
5 566 ms 572 ms 554 ms Oracle-micro-system.Boston.cw.net [204.70.179.102]
6 577 ms 574 ms 558 ms wwwwseast.usec.Oracle.com [192.9.49.30]
Trace complete.

Here, we can see that six hops are required to pass a packet from client to server. A second
protocol is supported by the Internet Layer: the Internet Control Message Protocol
(ICMP). ICMP allows error messages to be propagated between hosts, and supports higher-level
management functions such as congestion control. ICMP logically sits on top of IP.

The Transport Layer is Level 3 in the TCP/IP stack. This layer encapsulates all of the
transport protocols, including the Transmission Control Protocol (TCP) and the User
Datagram Protocol (UDP). The former is a connection-oriented protocol that guarantees that
packets will be delivered in sequence, while UDP makes few guarantees but has
less overhead.
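
To see the difference in practice, the netstat command can be used to list the active endpoints for each transport protocol separately. This is a minimal sketch, and the output (not shown here) will vary from system to system:

# netstat -an -P tcp
# netstat -an -P udp

TCP endpoints are reported with a connection state (such as LISTEN or ESTABLISHED), while UDP endpoints are typically just marked Idle, reflecting the connectionless nature of the protocol.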

Level 4 is the Application Layer, which supports most of the commonly used protocols,
such as Telnet, FTP, HTTP, NFS and SMTP. Most application developers and end users
work with protocols that are encapsulated by the Application Layer.

All of the layers are exposed when performing operations like packet sniffing. The
following example shows the data exchanged per-layer for a single packet:

# snoop -v tcp port 23
Using device /dev/hme0 (promiscuous mode)
ETHER: Ether Header
ETHER:
ETHER: Packet 1 arrived at 14:13:22.14
ETHER: Packet size = 60 bytes
ETHER: Destination = 1:58:4:16:8a:34,
ETHER: Source = 2:60:5:12:6b:35, Oracle
ETHER: Ethertype = 0800 (IP)

ETHER:
IP: IP Header
IP:
IP: Version = 4
IP: Header length = 20 bytes
IP: Type of service = 0x00
IP: xxx. .... = 0 (precedence)
IP: ...0 .... = normal delay
IP: .... 0... = normal throughput
IP: .... .0.. = normal reliability
IP: Total length = 40 bytes
IP: Identification = 46864
IP: Flags = 0x4
IP: .1.. .... = do not fragment
IP: ..0. .... = last fragment
IP: Fragment offset = 0 bytes
IP: Time to live = 255 seconds/hops
IP: Protocol = 6 (TCP)
IP: Header checksum = 11a9
IP: Source address = 64.23.168.76, moppet.paulwatters.com
IP: Destination address = 64.23.168.48, miki.paulwatters.com
IP: No options
IP:
TCP: TCP Header
TCP:
TCP: Source port = 62421
TCP: Destination port = 23 (TELNET)

TCP: Sequence number = 796159562


TCP: Acknowledgement number = 105859685
TCP: Data offset = 20 bytes
TCP: Flags = 0x10
TCP: ..0. .... = No urgent pointer
TCP: ...1 .... = Acknowledgement
TCP: .... 0... = No push
TCP: .... .0.. = No reset
TCP: .... ..0. = No Syn
TCP: .... ...0 = No Fin
TCP: Window = 8760
TCP: Checksum = 0x8f8f
TCP: Urgent pointer = 0
TCP: No options
TCP:
TELNET: TELNET:
TELNET:
TELNET:
TELNET:


Here, we can see the entries for TELNET (Application Layer, level 4), TCP (Transport Layer,
level 3), IP (Internet Layer, level 2) and ETHER (Network Layer, level 1).


Figure 2: The four layers of the TCP/IP stack

4. Application Layer
3. Transport Layer
2. Internet Layer
1. Network Layer


Exercise 10B: TCP/IP Stack


Summarize the key properties of each layer in the TCP/IP stack.
Read the man page for the arp command. Summarize each of the different options
available.
Read the man page for the snoop command. Summarize each of the different options
available.

Theme 10.3: Network Interfaces


All SPARC systems are installed with at least one network interface, which is
sufficient for a system to act as a client or a server. However, to act as a router or a
firewall, two network interfaces are required. Some high-end SPARC systems are supplied
with quad Ethernet cards, which allow Demilitarized Zones (DMZs) to be created in order to
protect local networks from external attacks.

Generally, network interfaces are initialized during the system boot. However, it is also
possible to manually configure a network interface by using the ifconfig command. The
ifconfig command takes many different options and is used to perform operations directly
on the network interface. For example, to display the current status of the interface, you
simply pass the name of the interface on the command-line as shown:

# ifconfig hme0
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.64.18.3 netmask ffffff00 broadcast 10.64.18.255
ether 8:0:20:c6:a5:72

This output shows all of the parameters for the network interface as configured. The
interface is up, meaning that it is accepting connections. To display the modules pushed onto
the device stream, one for each layer, the following command can be used:

# ifconfig hme0 modlist
0 arp
1 ip
2 hme


If a network interface has not been logically configured to work with the system, it can be
manually plumbed by using the following command:

# ifconfig hme0 plumb


To remove the logical configuration, the interface can be unplumbed:

# ifconfig hme0 unplumb

To bring down an interface, the following command can be used:



# ifconfig hme0 down


This will prevent connections from being accepted. The configuration will be reported as
follows:

# ifconfig hme0
hme0: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 10.64.18.3 netmask ffffff00 broadcast 10.64.18.255
ether 8:0:20:c6:a5:72

Of course, you should never unplumb an interface from a remote terminal! To bring the
interface back up, the following command could be used:

# ifconfig hme0 up
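
Putting these operations together, the following is a minimal sketch of manually bringing up an interface from scratch; the interface name, address and netmask are illustrative only, and in production the settings would normally be made persistent rather than typed by hand:

# ifconfig hme0 plumb
# ifconfig hme0 inet 10.64.18.3 netmask 255.255.255.0 broadcast + up

The broadcast + argument asks ifconfig to derive the broadcast address from the address and netmask supplied.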

Exercise 10C:Network Interfaces


Read the man page for the netstat command. Summarize each of the different
options available.
Read the man page for the ifconfig command. Summarize each of the different
options available.

Assignment 10.1: Client/Server Benefits


Description
After completing all of the reading assignments for this module, write a 1,000 word paper
summarizing the key benefits of using Solaris in a client/server environment.

Procedure
1. Review the reading assignments for this module.
2. Read the latest material on Solaris server benefits from the Oracle home page
(www.Oracle.com).
3. List the key benefits of Solaris in a client/server context.
4. Write a 1,000 word paper summarizing these benefits.

Assignment 10.2: RPC Services


Description
Use the appropriate command to display a list of all RPC services running on a Solaris
host. Save the printout from the session.

Procedure
1. Login to your system as root.
2. Read the man page for rpcinfo.
3. Execute the rpcinfo command to obtain a list of all current RPC services.
4. Save the printout from the session.

Module 11 - System Logging and Auditing


Overview
In any enterprise system, being able to track and isolate different aspects of user and
system operations is critical to maintaining system integrity. For example, in a security
context, it's important to be able to record and trace the activity of Internet daemons that
receive requests from external clients and respond to them. This is because
external clients might be involved in a Denial of Service (DoS) attack on the system,
preventing legitimate clients from making connections. Alternatively, external clients
could be mounting an intrusion attempt. Thus, one of the primary roles played by the
Solaris system logging (syslog) facility is intrusion detection.

The syslog facility provides a centralized system for recording a wide variety of system
events in a configurable format. Since the output from syslog can be stored in a single file
or a number of files in a standard format, utilities can be written that extract useful
information in a variety of ways. For example, a filter could be defined to extract alerts
raised on a particular day, notifying the super-user of any issues which might require
examination. In this module, you will learn how to work with the syslog facility to identify
key classes of events and create filters to reduce the administrative overhead involved in
recording potentially hundreds or thousands of daily events.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe the different functions of syslog.
Describe the syntax of the syslog.conf configuration file.
Interpret a syslog file containing different classes and event types.
Create syslog entries from the command-line.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: Syslog daemon.
3. Read Theme 2: Syslog configuration.
4. Read Theme 3: Using syslog.
5. Read Theme 4: Syslog and the command-line.
6. Complete Assignment 11.1: Monitoring syslog.
7. Complete Assignment 11.2: Modifying syslog.conf.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Oracle Solaris Administration: Common Tasks, Chapter 15


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 11.1: Syslog daemon


The syslog daemon (syslogd) is the program responsible for receiving logging
requests and writing them to the appropriate log file, log device or user specified in
the /etc/syslog.conf file. Messages are written sequentially, according to a
timestamp recorded at the time of each event. Each entry consists of a single line of
text containing the timestamp, the source of the message, a facility and priority indicator,
and the message itself. Priorities are numbered from 0 through 7, from most to least serious:

EMERG (0) - emergency (unstable system)
ALERT (1) - escalating crisis
CRIT (2) - critical error
ERR (3) - non-critical error
WARNING (4) - warning condition
NOTICE (5) - normal system entry
INFO (6) - daemon information
DEBUG (7) - debugging information

One of the nice features of syslog is that some levels, such as NOTICE, INFO and
DEBUG, can be ignored, as these don't contain any critical or emergency
notifications. In addition, entries for different priority events can be channeled into
different files. This allows administrators to interactively monitor different classes
of emergency or critical events. For example, if all EMERG events are directed
into the file /var/adm/messages.emerg, then one administrator could continuously
monitor new entries by using the following command:

# tail -f /var/adm/messages.emerg


Alternatively, if all ALERT events are directed into the file
/var/adm/messages.alert, then another administrator could continuously monitor
new entries by using the following command:

# tail -f /var/adm/messages.alert


Priorities are combined with facilities, which identify the source of each message and are
denoted by a separate set of codes:

AUTH - authentication messages
CRON - scheduling daemon messages
DAEMON - system daemon messages
KERN - kernel messages
LOCAL0 through LOCAL7 - customizable, locally defined messages
LPR - printer daemon messages
MAIL - mail messages
NEWS - Usenet news daemon messages
SYSLOG - syslog daemon messages
USER - user messages
UUCP - UNIX-to-UNIX copy program messages


Let's examine a sample segment from the default logfile /var/adm/messages:

$ cat /var/adm/messages
Apr 17 20:34:37 ivana genunix: [ID 540533 kern.notice] SunOS Release 5.11 Version Generic 64-bit
Apr 17 20:34:37 ivana genunix: [ID 784649 kern.notice] Copyright 1983-2000 Oracle Microsystems, Inc. All
rights reserved.
Apr 17 20:34:37 ivana genunix: [ID 678236 kern.info] Ethernet address = 8:0:20:c6:a5:72
Apr 17 20:34:37 ivana unix: [ID 389951 kern.info] mem = 131072K (0x8000000)
Apr 17 20:34:37 ivana unix: [ID 930857 kern.info] avail mem = 122445824
Apr 17 20:34:37 ivana rootnex: [ID 466748 kern.info] root nexus = Oracle Ultra 5/10 UPA/PCI (UltraSPARCIIi 360MHz)
Apr 17 20:34:37 ivana rootnex: [ID 349649 kern.info] pcipsy0 at root: UPA 0x1f 0x0
Apr 17 20:34:37 ivana genunix: [ID 936769 kern.info] pcipsy0 is /pci@1f,0
Apr 17 20:34:37 ivana pcipsy: [ID 370704 kern.info] PCI-device: pci@1,1, simba0
Apr 17 20:34:37 ivana genunix: [ID 936769 kern.info] simba0 is /pci@1f,0/pci@1,1
Apr 17 20:34:37 ivana pcipsy: [ID 370704 kern.info] PCI-device: pci@1, simba1
Apr 17 20:34:37 ivana genunix: [ID 936769 kern.info] simba1 is /pci@1f,0/pci@1


These entries were created during boot time on an Ultra 5 system. Only
NOTICE and INFO messages are shown for the kernel (KERN). Each message
comprises a timestamp, the hostname (ivana), a unique ID, the facility and
priority separated by a period (such as kern.info for the KERN
facility at the INFO level), and the message itself. For example, the message:

Apr 17 20:34:37 ivana unix: [ID 930857 kern.info] avail mem = 122445824


shows that on April 17th at 8:34 pm, on the system ivana, the unix kernel logged an
informational message stating that 122445824 bytes of RAM (about 116M) were available. If a
kernel module generates the message, then its name will be printed instead of
unix after the hostname. Examples in this output include the
modules genunix, rootnex, and pcipsy. Identifying the modules that cause errors can
assist in debugging system crashes and unexpected system activity, particularly
during booting.

Exercise 11A: Syslog daemon


Read the man page for the syslogd command. Make a list of all the available options
and summarize their functions.

Theme 11.2: Syslog configuration


The /etc/syslog.conf file is responsible for configuring the logging activity of the
syslog daemon. It contains a list of facility name and priority code combinations on
the left-hand side, and an associated action on the right-hand side. This allows
system logging to be configured very precisely, with different users notified of
different events, or different logfiles being set up for different facilities or priority
codes. For example, all KERN messages can be redirected to a single file, or all
EMERG messages can be directed to the super-user.

A basic syslog.conf file looks like this:

$ cat /etc/syslog.conf
*.err;kern.notice;auth.notice /dev/sysmsg
*.err /var/adm/messages
*.alert pwatters
*.info root
*.emerg *


This file specifies that all ERR, KERN.NOTICE and AUTH.NOTICE messages
should be directed to the console and any other devices specified by the
/dev/sysmsg device. Note that the wildcard character (*) in the selector matches
messages from every facility at the ERR level or above. Multiple actions can be associated with each selector. For
example, the second line indicates that all ERR level messages should be written to
the /var/adm/messages file, as well as being written to the console as specified by
the first line. The third line specifies that all ALERT messages should be sent to the
user pwatters, and the next states that all INFO messages should be sent to the root
user. Finally, all EMERG messages should be broadcast to all users.

A more advanced syslog.conf file looks like this:

$ cat /etc/syslog.conf
*.notice /var/log/notice
*.info /var/log/info
*.crit /var/log/crit
*.err /var/log/err


Here, messages at the NOTICE, INFO, CRIT and ERR levels (and above, since each
selector also matches higher severities) are directed to their own log files. This allows easy
access to messages of a given level without having to use the grep command, which can save time
when filtering large files.
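
Note that in syslog.conf the selector and action fields must be separated by tab characters, not spaces. After editing the file, the syslog daemon needs to re-read its configuration before the new rules take effect. On Solaris releases where syslogd runs under SMF, refreshing the service is one way to do this (a sketch; the exact service name can be confirmed with the svcs command):

# svcadm refresh svc:/system/system-log:default

On older releases, sending syslogd a HUP signal achieves the same result:

# pkill -HUP syslogd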

Exercise 11B: Syslog configuration


Read the man page for the syslog.conf file. Make a list of all the available options
and summarize their functions.
Read the man page for the /dev/sysmsg device. Make a list of all the available
options and summarize their functions.

Theme 11.3: Using syslog


The syslog facility is most useful when it is filtered. This is because, for any
particular day, a large number of entries may be created. One way of filtering this
material interactively is to use the tail command. For example, if all ERR messages
were stored in the file /var/adm/messages.err, the following command could be used
to interactively monitor new entries being recorded:

# tail -f /var/adm/messages.err


However, since most administrators do not monitor these files 24 hours per day, an
automated approach to extracting pertinent messages must be devised. The
following script shows how to use the date, cut and grep commands to extract all
messages for a particular string, recorded today:

$ cat filter_syslog.sh
#!/bin/sh
# filter_syslog.sh
# Takes parameter $1 as a string to be searched for in /var/adm/messages
# for the current date
DATE=`date | cut -f2,3 -d" "`; export DATE
grep "$DATE" /var/adm/messages | grep "$1"


The script works by reading a date stamp from the system, and extracting today's
month and day using cut (fields 2 and 3 of the output of the date command). This value
is exported to an environment variable called DATE. The next command then
searches the /var/adm/messages file for entries containing the day and month
contained in $DATE, and then filters those entries further for the string supplied on
the command-line. To use the script, supply a string to search for on the
command-line. For example, to search for all entries containing mail.alert, the
following command could be used:

$ filter_syslog.sh mail.alert
Jun 10 08:52:56 ivana sendmail[213]: [ID 702911 mail.alert] unable to qualify my own domain name (ivana)
using short name


Here, we can see only one mail.alert entry, relating to a name service problem. If this
script were run once every day, the administrator would automatically gain a list of
issues to be resolved.
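
To automate this, the script could be scheduled with cron and its output mailed to the administrator. The following crontab entry is a sketch only, with the script path and mail recipient chosen for illustration; it runs the filter for mail.alert entries at 6 a.m. each day:

0 6 * * * /usr/local/bin/filter_syslog.sh mail.alert | mailx -s "daily syslog alerts" root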

Exercise 11C: Syslog configuration


Read the man page for the grep command. Make a list of all the available searching
and pattern matching options and summarize their functions.
Read the man page for the date command. Make a list of all the available display
options and summarize their functions.
Read the man page for the cut command. Make a list of all the available column and
row matching options and summarize their functions.

Theme 11.4: Syslog and the command-line


Most administrators think of syslog as a facility that is used only by existing system
applications and services. However, a command-line tool called logger is also
available for use by users and in scripts. The logger command allows messages to
be inserted into the system log with different facilities and priorities. For example,
if you wrote an intrusion detection application that searched for patterns of
system use consistent with an attack, it might issue the following command
from within its script:

logger -p daemon.crit "**** INTRUDER DETECTED on pts/3"


This would result in the following entry being inserted into the /var/adm/messages
file:

Jun 10 10:16:44 ivana pwatters: [ID 702911 daemon.crit] **** INTRUDER DETECTED on pts/3

Here, we can see that the event has been recorded with the daemon.crit level. By
using the filter_syslog.sh script, an administrator could check to see whether new
entries have been added each day or every hour.
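
The logger command also accepts a tag via the -t option, which replaces the default name recorded in the entry and makes script-generated messages easier to filter later. As a sketch for the same hypothetical intrusion detector:

# logger -p daemon.crit -t ids "**** INTRUDER DETECTED on pts/3"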


Exercise 11D: Syslog configuration


Read the man page for the logger command. Make a list of all the available options
and summarize their functions.

Assignment 11.1: Monitoring syslog


Description
Students should use the grep command to create a list of all telnet sessions logged in the
system log.

Procedure
1. Login to your system as root.
2. Use the grep command to create a list of all telnet sessions logged in the system log.

Assignment 11.2: Modifying syslog.conf


Description
Students should create a syslog.conf file that logs all possible system events.

Procedure
1. Login to your system as root.
2. Create a new syslog.conf file by using the touch command.
3. Edit the syslog.conf file by using vi.
4. Insert entries that allow all possible system activities to be logged.

Module 12 - Disk Management and Pseudo File Systems


Overview
Data management is a key role of an enterprise system administrator. Whether dealing
with user files or database systems, ensuring data preservation and guaranteeing data
availability are two major tasks assigned to administrators. Solaris provides a number of
different ways to manage data through its use of file systems, which are the logical
representation of underlying disk subsystems. However, while the simplest approach to
storing data on file systems just creates them on top of disk slices, a further set of
abstractions is possible by using volume management. Volume managers, like Solaris
Volume Manager, allow Redundant Array of Inexpensive Disks (RAID) levels, such
as mirroring and/or striping, to be implemented. Mirroring ensures that if a disk fails, its
contents can be retrieved from a second disk which acts as a mirror. This requires 2N
physical disks for every N logical disks required by the system. Striping allows a logical
disk volume to be defined across a number of physical disks. This permits a single
disk volume to be defined which has a very large logical size, with very little overhead.
For example, 8 x 36G drives could be combined using striping to create a single virtual
drive with 288G capacity.

In this module, you will learn how to manage logical disk and disk volumes. In addition,
advanced file system management skills, like adding virtual memory and administering
the /proc and pseudo file systems, will be covered.

Learning Outcomes
Upon successful completion of this course, students will be able to:

State the steps required to create, check, and mount file systems.
Describe the differences between physical disk devices and disk metadevices.
List the steps required to create disk volumes using Solaris Volume Manager.
State the properties of a pseudo file system.
Describe the commands used to operate on the /proc file system.
State the steps required to add virtual memory to the system.

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Complete the assigned Readings, following the suggested order outlined in this path.
2. Read Theme 1: File systems.
3. Read Theme 2: Volume management.
4. Read Theme 3: Using Volume Manager.
5. Read Theme 4: The proc file system.
6. Read Theme 5: Virtual memory.
7. Complete Assignment 12.1: Using fsck.
8. Complete Assignment 12.2: Creating virtual memory.

Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.

Oracle Solaris Administration: Common Tasks, Chapter 19


(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 12.1: File systems


Typical file system management operations include the following:

Creating new file systems by using mkfs or newfs commands
Checking file system integrity by using the fsck command
Mounting file systems by using the mount command

A new file system can be created by using the mkfs or newfs commands. The newfs
command provides a simple front-end for creating a UNIX file system (UFS), while the
mkfs command can be used to create file systems of different types.
Many Solaris systems contain UFS file systems, although sometimes MS-DOS file
systems (PCFS) may also be required for diskettes shared with systems running Microsoft
Windows.

To create a new UFS file system, simply execute the newfs command with the raw device
name (e.g., /dev/rdsk/c0t0d0s0) on the command-line:

# newfs -v /dev/rdsk/c0t0d0s0
mkfs -F ufs -o N /dev/rdsk/c0t0d0s0 3533040 63 16 8192 1024 32 3 90 4096 t 0 1 8 16
/dev/rdsk/c0t0d0s0: 3533040 sectors in 3505 cylinders of 16 tracks, 63 sectors
1725.1MB in 110 cyl groups (32 c/g, 15.75MB/g, 3904 i/g)
super-block backups (for fsck -F ufs -o b=#) at: 32, 32352, 64672, 96992, 129312, 161632, 193952, 226272, 258592,
290912, 323232, 355552, 387872, 420192, 452512, 484832, 516128, 548448, 580768, 613088, 645408, 677728,
710048, 742368, 774688, 807008, 839328, 871648, 903968, 936288, 968608, 1000928, 1032224, 1064544, 1096864,
1129184, 1161504, 1193824, 1226144, 1258464, 1290784, 1323104, 1355424, 1387744, 1420064, 1452384, 1484704,
1517024, 1548320, 1580640, 1612960, 1645280, 1677600, 1709920, 1742240, 1774560, 1806880, 1839200, 1871520,
1903840, 1936160, 1968480, 2000800, 2033120, 2064416, 2096736, 2129056, 2161376, 2193696, 2226016, 2258336,
2290656, 2322976, 2355296, 2387616, 2419936, 2452256, 2484576, 2516896, 2549216, 2580512, 2612832, 2645152,
2677472, 2709792, 2742112, 2774432, 2806752, 2839072, 2871392, 2903712, 2936032, 2968352, 3000672, 3032992,
3065312, 3096608, 3128928, 3161248, 3193568, 3225888, 3258208, 3290528, 3322848, 3355168, 3387488, 3419808,
3452128, 3484448, 3516768


Here, we can see a list of the disk blocks where a backup of the super-block is created.
Thus, if the super-block is corrupted, it can be read from another block. The file system
created was 1725.1M in size, occupying 3533040 sectors in 3505 cylinders of 16 tracks.
For reference, the newfs command also displays the equivalent parameters that could be
used to create the file system by using the mkfs command:

# mkfs -F ufs -o N /dev/rdsk/c0t0d0s0 3533040 63 16 8192 1024 32 3 90 4096 t 0 1 8 16

Once a file system has been created, logging should be enabled to ensure that the file
system can be recovered quickly if the system crashes. If logging is not enabled in the /etc/vfstab
file for each volume, then damaged file systems must be repaired by using the fsck command.
However, since the fsck command is usually run at boot time, this can significantly extend
the amount of time required for booting. Note that fsck should never be used on a mounted
file system.
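
As a sketch of how logging is enabled, the mount options field (the last field) of the relevant /etc/vfstab entry is set to logging; the device names and mount point below are illustrative only, and on recent Solaris releases UFS logging is enabled by default in any case:

/dev/dsk/c0t0d0s5  /dev/rdsk/c0t0d0s5  /data  ufs  2  yes  logging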

To mount a file system, the mount command is used. A mount point must be created for a
file system before it is mounted. The following command sequence creates a mount point
/data, and then mounts a UFS file system c0t0d0s5 on /data:

# mkdir /data
# mount /dev/dsk/c0t0d0s5 /data


Only the super-user can mount file systems directly. To check which file systems have
already been mounted, the mount command can be used without any options:

# /sbin/mount
/ on /dev/dsk/c0t0d0s0 read/write/setuid/intr/largefiles/onerror=panic/dev=2200000 on Mon Jun 10 08:51:25 2002
/proc on /proc read/write/setuid/dev=31c0000 on Mon Jun 10 08:51:24 2002
/dev/fd on fd read/write/setuid/dev=3280000 on Mon Jun 10 08:51:26 2002
/etc/mnttab on mnttab read/write/setuid/dev=3380000 on Mon Jun 10 08:51:28 2002
/var/run on swap read/write/setuid/dev=1 on Mon Jun 10 08:51:28 2002
/tmp on swap read/write/setuid/dev=2 on Mon Jun 10 08:51:30 2002
/export/home on /dev/dsk/c0t0d0s7 read/write/setuid/intr/largefiles/onerror=panic/dev=2200007 on Mon Jun 10
08:51:30 2002

Exercise 12A: File systems


Read the man page for the mount command. Make a list of all the available options
and summarize their functions.

Theme 12.2: Volume management


Enterprise systems must be continuously available to fulfill their role as back-end servers.
However, the problem faced by all enterprise systems is hardware failure: hard disks,
CPUs and system boards all have a Mean Time To Failure (MTTF), which is the average
lifetime that can be expected from each component type. Thus, if a particular type of
hard disk has an MTTF of two years, you can expect, on average, each of those disks to fail after
about two years of service. Since disks contain valuable data that takes a long time to restore from tape
backups, it's obviously more effective to increase the reliability of disk systems. One way
to do this is to implement disk mirroring, which ensures that data is written concurrently
to two disks. Thus, if one disk fails, the other disk can be used to read and write data, and the
system's operations are unaffected because one device is still active. The failed disk can
then be repaired or replaced while the system continues its work. When a new disk is
installed, it is updated to contain the same data as the disk which has continued
operating, until the mirror is brought up to date. Many SPARC systems
allow disks to be hot-swapped in this way; thus, when a hard drive fails, the failure
does not bring down the entire system. For large systems like the E10000, with up to 64 CPUs
and up to 16 virtual domains, it's clearly important to continuously maintain
operations.

Mirroring is one key function of volume management. The other function is striping,
where a number of disks are logically combined to form a single virtual disk. This allows
applications like database servers to address a logical volume as a single entity rather than
having to create a separate interface to deal with each underlying physical disk.

Both mirroring and striping are supported by the Redundant Array of Inexpensive Disks
(RAID) scheme, where a number of levels are associated with different volume types.
RAID level 0 is equivalent to striping, while RAID level 1 is mirroring. The Volume
Manager package supplied with Solaris supports both striping and mirroring. In order to
support RAID, Volume Manager requires that a set of state database replicas be created
using the metadb command[2]. This allows control and state data to be stored redundantly
across a number of disks, maximizing the chances of complete data recovery. For
example, to create state database replicas on the slices c0t0d0s5 and c1t0d0s5, the
following command would be used:

# metadb -a -f -c 3 /dev/dsk/c1t0d0s5 /dev/dsk/c0t0d0s5


Note that the state databases are now replicated across two different controllers (c1 and
c0), maximizing redundancy.
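
To verify that the replicas were created, and to inspect their status flags, the metadb command can be run with the -i option (a quick check; output omitted here):

# metadb -i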

Exercise 12B: Volume management


Read the man page for the metadb command. Make a list of all the available options
and summarize their functions.

Theme 12.3: Using Volume Manager


Once state databases exist on the disks that are involved in volume management, it is
then possible to create metadevices and their associated virtual disk devices. There are two
basic configurations that can be created by using the metainit command: striping and
mirroring. To create a striped set of two 36G slices, creating a virtual disk of 72G capacity,
the following entry could be made in the md.tab configuration file:

d1 1 2 c1t0d0s5 c2t0d0s5


Here, the two partitions c1t0d0s5 and c2t0d0s5, running on separate controllers, are
combined to form the virtual disk d1. To initialize the d1 volume, create a file system on it,
and mount it on /data, the following command sequence can be used:

# metainit d1
# newfs /dev/md/rdsk/d1
# mkdir /data
# mount /dev/md/dsk/d1 /data


Alternatively, if you wanted to create a mirrored virtual file system called d2, by writing to
both c2t0d0s5 and c1t0d0s5 concurrently, the following definitions would need to be
entered into md.tab:

d2 -m d3
d3 1 1 c1t0d0s5
d4 1 1 c2t0d0s5

While d2 is the virtual disk device for the mirrored volume, each underlying slice must also
have a virtual counterpart (d3 maps to c1t0d0s5 and d4 maps to c2t0d0s5). The mirror d2 is
initially defined with a single submirror (d3); the second submirror (d4) is attached once the
mirror exists, and the file system is created on the mirror itself. To initialize the mirrored file
system and mount it as /oracle, the following command sequence would be used:

# metainit d3
# metainit d4
# metainit d2
# metattach d2 d4
# newfs /dev/md/rdsk/d2
# mkdir /oracle
# mount /dev/md/dsk/d2 /oracle
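
Once the mirror exists, its health and the progress of the resync of the second submirror can be checked with the metastat command; a minimal check, output omitted:

# metastat d2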

Exercise 12C: Using Volume Manager


Read the man page for the metainit command. Make a list of all the available
options and summarize their functions.

Theme 12.4: The proc file system


The proc file system presents a set of files that relate to every process and
lightweight process running on a system. Although it appears as a directory tree under /proc,
it is a pseudo file system: its contents are generated from kernel data structures rather than
stored on disk. The following data is available for each process and lightweight process:

Address space (as)
Control data (ctl)
Credential file (cred)
File descriptor (fd)
Local file descriptor table (ldt)
Paging data (pagedata)
Process data (psinfo)
Real memory map (rmap)
Root directory (root)
Signal data (sigact)
Status information (status)
Virtual memory map (map)
Working directory (cwd)

You can access the process data directly for each PID by changing to that process's
directory underneath /proc. For example, if a command had the PID 256, then the
following files would be contained underneath /proc/256:

# ls -l
total 3565
-rw-------   1 root     root     1802240 Jun 10 08:54 as
-r--------   1 root     root         152 Jun 10 08:54 auxv
-r--------   1 root     root          32 Jun 10 08:54 cred
--w-------   1 root     root           0 Jun 10 08:54 ctl
lr-x------   1 root     root           0 Jun 10 08:54 cwd ->
dr-x------   2 root     root        1040 Jun 10 08:54 fd
-r--r--r--   1 root     root         120 Jun 10 08:54 lpsinfo
-r--------   1 root     root         912 Jun 10 08:54 lstatus
-r--r--r--   1 root     root         536 Jun 10 08:54 lusage
dr-xr-xr-x   3 root     root          48 Jun 10 08:54 lwp
-r--------   1 root     root        2400 Jun 10 08:54 map
dr-x------   2 root     root         544 Jun 10 08:54 object
-r--------   1 root     root        2776 Jun 10 08:54 pagedata
-r--r--r--   1 root     root         336 Jun 10 08:54 psinfo
-r--------   1 root     root        2400 Jun 10 08:54 rmap
lr-x------   1 root     root           0 Jun 10 08:54 root ->
-r--------   1 root     root        1440 Jun 10 08:54 sigact
-r--------   1 root     root        1232 Jun 10 08:54 status
-r--r--r--   1 root     root         256 Jun 10 08:54 usage
-r--------   1 root     root           0 Jun 10 08:54 watch
-r--------   1 root     root        3800 Jun 10 08:54 xmap


There are several commands that can be used to make sense of this data, including:

pflags - prints tracing flags
pcred - displays process credentials
pmap - prints the address space map
pldd - displays a list of libraries being used
psig - prints current process signals
pstack - displays a stack trace
pfiles - lists a set of open file details
pwdx - displays the current working directory
ptree - prints a process tree
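
As a brief illustration, these tools can be pointed at the hypothetical PID 256 used above to answer common questions about a running process (output omitted):

# ptree 256
# pfiles 256
# pwdx 256

The first command shows where the process sits in the process tree, the second lists its open files and sockets, and the third reports its current working directory.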

Exercise 12D: The /proc file system


Read the man page for the pflags command. Make a list of all the available options
and summarize their functions.
Read the man page for the ptree command. Make a list of all the available options
and summarize their functions.

Theme 12.5: Virtual memory


Virtual memory, in this context, consists of disk blocks (swap space) that can be read and written as if
they were Random Access Memory (RAM). Virtual memory is typically used by systems that don't have
sufficient physical memory available to carry out their operations. Clearly, since disk
speeds are much slower than RAM access speeds, this involves a significant reduction in
I/O performance. However, if you can't afford more RAM, then virtual memory may be
your only option.

To install virtual memory on a system, a file must be created with a specific capacity, such
as 10G. This will then allow the file to be used as a virtual memory device. Alternatively,
a specific raw partition can be set aside for use as a virtual memory store. To create a file
/swap with 100M capacity, for use as a virtual memory store, the following command can
be used:

# mkfile 100m /swap


To add the file as virtual memory, the following command can be used:

# swap -a /swap


To report on current available virtual memory, the following command can be used:

# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c0t0d0s1 136,1 16 1049312 1049312
/swap - 16 2032 2032

This output shows the number of free and used blocks for all virtual memory devices.
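
A swap file added with swap -a is lost at the next reboot unless it is also listed in /etc/vfstab. A sketch of the corresponding entry for the /swap file created above is:

/swap  -  -  swap  -  no  -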

Exercise 12E: Virtual memory


Read the man page for the swap command. Make a list of all the available options
and summarize their functions.

Assignment 12.1: Using fsck


Description
Students should use fsck to check at least one filesystem.

Procedure
1. Login to your system as root.
2. Use fsck to check at least one unmounted filesystem.

Assignment 12.2: Creating virtual memory


Description
Students should add a swap file to the system.

Procedure
1. Login to your system as root.
2. Create a new swap file of 20M using the mkfile command.
3. Add the swap file to the system.

Module 13: Processes, Threads and CPU Scheduling


Overview
Solaris is based on a multi-user, multi-process and multi-threaded processing model. This
model requires the CPU to operate in various modes and to undertake various kinds of
scheduling, including real time scheduling. This module aims to explore processes and
threads in depth, and how they can be tuned to improve performance. Starting with an
examination of the relationship between threads and the processes that spawn them, key
issues such as locking and interrupt levels will be covered. Process monitoring tools, such
as top, will be investigated.

At the kernel level, CPU scheduling becomes a key issue in system performance when
running complex multi-threaded applications like Java application servers. Scheduling
classes and priorities set the order in which tasks are performed. Tools such as priocntl and
mpstat can be used to monitor and modify the way in which scheduling is performed. This
module will examine how to use the Solaris Resource Manager, which provides an easy-to-use front-end for scheduling management and related activities.

Learning Outcomes
Upon successful completion of this course, students will be able to:

Describe how to use the ps command for process monitoring.
State the key properties of processes and threads in a multiprocess, multithreaded
system
Identify the role of interrupt levels and the lockstat program
State the key properties of real-time scheduling and scheduling classes
Describe the role of processor sets
Identify the steps required to monitor CPU activity
Describe how to use the Solaris Resource Manager

Path to Complete the Module


For best results, you may wish to follow the course author's suggested path as outlined
below.

1. Read Theme 1: Processes and Threads.
2. Read Theme 2: Process Monitoring.
3. Read Theme 3: Real-time Scheduling.
4. Read Theme 4: CPU Monitoring.
5. Complete Assignment 13.1: DNLC.
6. Complete Assignment 13.2: Inode statistics.

Readings
Oracle Solaris Administration: Common Tasks, Chapter 20
(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)

Theme 13.1: Processes and Threads


A process is a discrete job that is executed by a user and is identified by a unique Process
ID (PID). When a user executes a process, no unprivileged user may interfere with that
process: the user owns the process, much like a user owns a file. Note that there is no
concept of process access permissions allowing group members or other users to
communicate with a user's processes, although each process is associated with the GID of
the executing user.

Since Solaris supports multiple concurrent users, many different users can execute
processes at the same time. In addition, each process can spawn a number of lightweight
processes (or threads). The use of threads minimizes the overhead associated with creating
and destroying processes, which is relatively large compared to threads. Users
interact with their processes by sending signals, either through a programming API or directly on
the command line by using the kill command.

One of the best features of the process model is the ability for the super-user to assign
execution priorities to each process on the system. Thus, more urgent tasks can be granted
priority over less urgent tasks. In addition, multiprocessor systems can allocate one or
more processors to execute a single process or set of processes.


Exercise 13A: Processes and Threads


Make a list of five advantages that multi-process operating systems have over single
process operating systems.
Make a list of five advantages that multi-threaded operating systems have over
single-threaded operating systems.

Theme 13.2: Process Monitoring


The list of processes running on a system is visible to all users and can be generated by
using the ps command. By default, the ps command only shows the processes for the
currently logged-in user, as shown in the following example:

$ ps
PID TTY TIME CMD
26923 pts/8 0:00 tcsh
26934 pts/8 0:00 newmail


In this example, the user has two processes running (PIDs 26923 and 26934), both spawned
from terminal pts/8, and both having consumed minimal CPU time. The
applications running are the TENEX C shell (tcsh) and the newmail command; the former is
running in the foreground, while the latter is running in the background.

The ps command has many options. For example, to display a list of all processes running
on a system, the ps -A command can be used as follows:

$ ps -A
PID TTY TIME CMD
0 ? 0:13 sched
1 ? 0:50 init
2 ? 0:03 pageout
3 ? 250:35 fsflush
562 ? 0:00 sac
345 ? 0:01 xntpd
255 ? 0:00 lockd
62 ? 0:00 sysevent
64 ? 0:00 sysevent
374 ? 0:00 dptelog
511 ? 0:00 keyserv
291 ? 0:11 cron
212 ? 0:00 in.ndpd
336 ? 0:17 utmpd


To display a full listing for all processes, the ps -Af command can be used:


$ ps -Af
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Apr 11 ? 0:13 sched
root 1 0 0 Apr 11 ? 0:50 /etc/init
root 2 0 0 Apr 11 ? 0:03 pageout
root 3 0 0 Apr 11 ? 250:35 fsflush
root 562 1 0 Apr 11 ? 0:00 /usr/lib/saf/sac -t 300
root 345 1 0 Apr 11 ? 0:01 /usr/lib/inet/xntpd
root 255 1 0 Apr 11 ? 0:00 /usr/lib/nfs/lockd
root 62 1 0 Apr 11 ? 0:00 /usr/lib/syseventd
root 64 1 0 Apr 11 ? 0:00 /usr/lib/syseventconfd
root 374 1 0 Apr 11 ? 0:00 /opt/ORACLEWhwrdg/dptelog
root 511 1 0 Apr 11 ? 0:00 /usr/sbin/keyserv
root 291 1 0 Apr 11 ? 0:11 /usr/sbin/cron
root 212 1 0 Apr 11 ? 0:00 /usr/lib/inet/in.ndpd
root 336 1 0 Apr 11 ? 0:17 /usr/lib/utmpd


Here, we can see the command names associated with each of the processes being
executed.
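
The ps command also supports a user-defined output format via the -o option, which is handy when only a few columns are of interest. As a small sketch, the following lists every process with its parent PID, owner, CPU percentage and command name:

$ ps -eo pid,ppid,user,pcpu,comm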

Exercise 13B: Process Monitoring


Use the ps command to display the list of all processes running on the system. Add
up the total CPU time consumed by all currently running processes.

Theme 13.3: Real-time Scheduling


The first process to be spawned on a Solaris system is always the scheduler, sched (PID 0). This
process represents the kernel scheduler, which schedules all other processes. Once the scheduler is
running, the init process is spawned (PID 1). The init process is the ultimate parent
(PPID) for all spawned processes on the system. For example, if you executed a process
with a PID of 257, which in turn spawned a process with a PID of 358, then the PPID of
358 is 257, and the PPID of 257 is 1. If PID 257 is killed, then the PPID of 358 reverts to
1. PIDs 0 and 1 are shown in the following ps output:

$ ps -A
PID TTY TIME CMD
0 ? 0:00 sched
1 ? 0:00 init


In this example, a question mark (?) in the TTY column indicates that the process is not bound to any specific terminal.
The ps -c command displays the process list in scheduler format, as shown below:


# ps -c

PID CLS PRI TTY TIME CMD
290 TS 40 pts/2 0:00 sh
295 TS 48 pts/2 0:00 bash
299 TS 58 pts/2 0:00 ps


In this example, a scheduling class (CLS) and priority (PRI) value are displayed for each process.
The class can be one of the following:

SYS - the System class
TS - the Time Sharing class, with a configured user priority range of -60 through 60
IA - the Interactive class, with a configured user priority range of -60 through 60

The long format of the command displays even more process characteristics related to
scheduling:

# ps -clf
F S UID PID PPID CLS PRI ADDR SZ WCHAN STIME TTY TIME CMD

8 S root 290 289 TS 40 ? 38 ? 19:34:25 pts/2 0:00 sh


8 R root 295 290 TS 48 ? 301 19:34:27 pts/2 0:00 ls
8 O root 369 295 TS 58 ? 130 19:35:58 pts/2 0:00 ps


The format here reflects scheduler properties that can be displayed or set with priocntl, a command
that prints or modifies the scheduling parameters of processes. You can retrieve a list of
all scheduling classes configured on the system by using the following command:

# priocntl -l
CONFIGURED CLASSES
==================
SYS (System Class)
TS (Time Sharing)
Configured TS User Priority Range: -60 through 60
IA (Interactive)
Configured IA User Priority Range: -60 through 60


The SYS class is reserved for kernel threads, while the Real Time (RT) class, where configured,
allows processes to be run with a fixed high priority on the system, without regard to the requirements
of other processes. This is very useful where a real-time controller or some other external device
needs to be supported with strict temporal resolution for data collection or control. In this
instance, the system acts more like a single-user system. However, it's more usual for UNIX
processes to be time-sharing, since this is the basis of a multi-user system, using algorithms
that ensure a fair distribution of CPU time amongst a set of processes competing for a scarce resource.
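
The priocntl command can also change the scheduling parameters of a running process. As a hedged sketch (PID 295 is the hypothetical shell process from the listing above), the following lowers its user priority within the TS class:

# priocntl -s -c TS -p -10 -i pid 295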


Exercise 13C: Real-time Scheduling


Create a list of all scheduling classes supported by your system. Use the ps
command to display a list of all processes and their classes.

Theme 13.4: CPU Monitoring


An important view of process activity and CPU load is provided by the top command:

last pid: 4348; load averages: 1.28, 1.20, 1.21 15:40:11
344 processes: 333 sleeping, 8 zombie, 1 stopped, 2 on cpu
CPU states: 54.6% idle, 22.3% user, 17.4% kernel, 5.7% iowait, 0.0% swap
Memory: 2048M real, 1158M free, 1035M swap in use, 11G swap free

PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
20890 jdoe 1 0 19 1208K 864K cpu/2 458.5H 24.89% a.out
4266 pwatters 1 52 0 3128K 2272K cpu/1 0:00 0.58% top
4321 jjfrost 1 60 0 2800K 2216K sleep 0:00 0.15% imapd
307 root 39 52 0 17M 9648K sleep 21:24 0.07% nscd
572 dnscache 1 58 0 3064K 2336K sleep 10:49 0.04% dnscache
4155 jbloggs 1 58 0 2792K 2200K sleep 0:00 0.04% imapd
4153 jbloggs 1 52 0 2792K 2200K sleep 0:00 0.03% imapd
573 dnslog 1 58 0 1024K 704K sleep 4:34 0.02% multilog
569 root 16 59 0 110M 76M sleep 45:09 0.02% squid
290 root 23 58 0 5320K 2576K sleep 10:06 0.02% syslogd
18917 root 1 59 0 3576K 2440K sleep 0:06 0.02% sshd
19163 root 1 49 0 5504K 3296K sleep 0:01 0.02% smbd
4237 jtintern 1 54 0 2792K 2184K sleep 0:00 0.02% imapd
4154 jbloggs 1 60 0 2800K 2216K sleep 0:00 0.02% imapd
559 root 1 58 0 2832K 1344K sleep 12:04 0.02% sshd


Key columns in the top command include THR (number of threads spawned by a process),
PRI (process priority), NICE (process nice value), SIZE (process size), RES (amount of
application data resident in memory), and STATE (run state or sleep). A summary of
system load data is also provided, including CPU and memory load.

Load averages are also provided as part of the w command:

$ w
3:40pm up 31 day(s), 22:37, 41 users, load average: 1.42, 2.01, 2.15
User tty login@ idle JCPU PCPU what
jones pts/1 7:56am 24 1:26 24 /bin/ispell -a -m -B

yang pts/2 Fri 5pm 23:33 16 14 mailtool


To view the status of all CPUs installed on a system, the psrinfo command can be used:

$ psrinfo
0 on-line since 04/11/02 04:09:58
1 on-line since 04/11/02 04:09:59
2 on-line since 04/11/02 04:09:59
3 on-line since 04/11/02 04:09:59
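
For per-processor utilization over time, the mpstat command reports interrupt, system-call and CPU-usage figures for each CPU at a chosen interval, and psrinfo -v gives more detail about each processor. A minimal sketch, sampling every five seconds three times:

$ mpstat 5 3
$ psrinfo -v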

Exercise 13D: CPU Monitoring


Write a shell script to monitor CPU usage every hour. Use gnuplot or a similar
graphics program to generate a plot of CPU activity values over a 24 hour period.

Assignment 13.1: DNLC


Description
Students should run the sar command to monitor DNLC attribute cache rates. In 500
words or less, interpret the output.

Procedure
1. Login to your system as root.
2. Execute the sar command.
3. Record the output from the command.
4. Read the sar man page.
5. Write a 500 word report explaining the output.

Assignment 13.2: Inode Statistics


Description
Students should run the netstat command to display the inode statistics. In 500 words or
less, interpret the output.

Procedure
1. Login to your system as root.
2. Execute the netstat command.
3. Record the output from the command.
4. Read the netstat man page.
5. Write a 500 word report explaining the output.

[1] Note that this administration guide is now six years out-of-date, and only current to Solaris 10
[2] Note that you may have to install the Volume Manager package manually using the command: pkg install
storage/svm
