UNIX System Administration with Solaris 11.3:
A Course for Beginners
Professor Paul A. Watters
Readings
Theme 1.1: Key Features of Solaris
Features of UNIX
Readings
Theme 2.1: OpenBoot PROM Monitor
Theme 2.2: SPARC Computer Systems
Theme 2.3: Solaris Installation
Theme 2.4: Package Management
Theme 2.5: Patch Management
Assignment 2.1: Solaris Key Benefits
Description
Procedure
Learning Outcomes
Path to Complete the Module
Readings
Theme 3.1: Run Levels
Exercise 3A: Run Levels
Readings
Theme 4.1 - Regular Expressions
Exercise 4A: Regular Expressions
Readings
Theme 5.1: File System Overview
Disk Devices
File Systems
Exercise 5A: File System Overview
Procedure
Readings
Theme 6.1: Booting
Exercise 6A: Booting
Overview
Learning Outcomes
Path to Complete the Module
Readings
Theme 7.1: Format
Exercise 7A: Format
Readings
Theme 8.1: Basic Commands
Exercise 8A: Basic Commands
Readings
Theme 9.1: Servers
Exercise 9A: Servers
Readings
Theme 10.1: OSI Stack
Exercise 10A: OSI Stack
Readings
Theme 11.1: Syslog daemon
Exercise 11A: Syslog daemon
Readings
Theme 12.1: File systems
Exercise 12A: File systems
Readings
Theme 13.1: Processes and Threads
Exercise 13A: Processes and Threads
Overview
This section is a general welcome and a basic description of the course.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe key Solaris operating system concepts.
State the steps required to operate with the OpenBoot PROM monitor.
Install a Solaris system.
Initialize a Solaris system for user access.
Manage users and groups.
Implement security and process control strategies.
Administer files, directories and file systems.
Manage booting and disk configuration.
Administer disks.
Perform backup and restore operations.
Execute basic commands.
Use the vi editor.
Remotely access client systems.
Course Author
Meet the course author, Dr Paul A. Watters.
Getting Started
This course is intended to be a basic introduction to UNIX system administration. It is not
intended to be an encyclopedic reference; it is designed to introduce the UNIX system to
you, and equip you with basic skills to manage and run your own systems. You will learn
to use Solaris 11.3, the latest version of the UNIX operating system. The goal is to get you
used to working in a UNIX-like way rather than teaching you every possible command or
technique. Note that advanced topics like zones and ZFS are not covered in this course, but all
commands have been tested on Solaris 11.3.
Paul A. Watters received his PhD in computer science from Macquarie University,
Sydney, Australia. He has also earned degrees from the University of Cambridge,
University of Tasmania, and the University of Newcastle. Dr. Watters has written several
books on the Solaris operating environment, including Solaris 8: The Complete Reference,
Solaris Administration: A Beginner's Guide, Solaris 8 All-In-One Certification Guide, and
Solaris 8 Administrator's Guide.
After a stint dealing with security and privacy of electronic health records at the Medical
Research Council in the United Kingdom, Dr Watters moved to the University of Ballarat
in 2008, to become the first Research Director of the Internet Commerce Security
Laboratory (ICSL), a partnership between Westpac, IBM, the State Government of
Victoria, and the Australian Federal Police (AFP). The ICSL's goal was to build capability
in the cybercrime field, and to make Victoria the state of choice to undertake this type of
work. In addition to numerous research publications, and skilled graduates who now
protect Australia's cyber frontline, the ICSL also produced significant outcomes for its
research partners in the areas of threat mitigation (phishing, malware, identity theft,
scams, piracy, child exploitation) and intelligence gathering. Dr Watters undertook
consultancies for numerous external clients, including the Australian Federation Against
Copyright Theft (AFACT), the Attorney-General's Department (AGD) and Google. While
on sabbatical with the AFP, he developed an approach to detecting drug deals online.
In 2013, Dr Watters took up a Professorship in IT at Massey University in New Zealand.
He continued his work in online threats, especially focusing on advertising as a vector for
malware delivery and social harms. He also won two Callaghan Innovation grants to
develop new algorithms for data analytics. He partnered with NGOs such as End Child
Prostitution and Trafficking (ECPAT) to systematically examine the links between film
piracy and the proliferation of child abuse material online.
In 2015, Dr Watters also became an Adjunct Professor at Unitec Institute of Technology,
the home of New Zealand's first cyber security research centre. In recognition of his track
record combating child abuse material online, he received an ARC Discovery grant in
Web-Based Readings
This course includes required online readings. You will access them from links within
each module where they are assigned. A list of the articles appears below.
Oracle, System Administration Guide: Basic Administration. Available
freely at https://docs.oracle.com/cd/E19253-01/817-1985/817-1985.pdf.
What's New in Solaris 11.3. Available freely at
http://docs.oracle.com/cd/E53394_01/html/E54847/index.html.
Course Specific Technology Requirements
Students will need to obtain access to a Solaris system by using one of the following
methods:
1. Obtain Solaris for Intel from Oracle
(http://www.oracle.com/technetwork/serverstorage/solaris11/downloads/index.html). Obtain software (such as Boot Magic) that
will allow Solaris for Intel to be installed on a PC and dual-booted, if required; or
2. Obtain Solaris on a SPARC system by purchasing a second-hand SPARC system
from eBay (www.ebay.com), and Solaris for SPARC from Oracle.
New systems can be purchased from Oracle for around $1,000; or
3. Create a free user account with a public access UNIX system like sdf.lonestar.org.
Course Structure
There are thirteen modules in this course, each of which includes specific readings and
assignments. Each module is briefly described below.
Module 1: System Concepts
This module introduces the concept of a Solaris system in the context of the enterprise.
Understanding the material in this module provides a basis for students to distinguish the
roles of system administrator and network administrator. In addition, students explore the
history and current hot topics in the Solaris operating environment and SunOS operating
system. Since industry certification is a key measure of the course's success in providing a
comprehensive introduction to enterprise systems, some exam tips and tricks will be
covered. Key concepts, including daemons, shells, file systems, and the kernel will be
covered in detail, from a theoretical perspective, as well as with full details of the Solaris
implementation. We also discuss pragmatic aspects of using Solaris, including how to
obtain on-line help.
Module 2: Boot PROM and System Installation
Many PC users are familiar with the operating system BIOS which controls various
bootstrapping and initialization issues. While SPARC hardware also features a system for
bootstrapping, known as the OpenBoot PROM monitor, this facility is far more complex
than a PC's BIOS. For a start, the PROM monitor has a complete implementation of the Forth
programming language, making it possible to customize complex system settings. In
addition, the PROM monitor can be used to test and secure hardware, and prepare a
system for installation and configuration. Once a system's PROM monitor has been
configured, the Solaris operating environment, including the SunOS 5.11 operating system
(i.e., Solaris 11.3), can be installed. However, preparing for and conducting an installation
requires some knowledge of the different hardware devices, types and systems supported
by Solaris, which are reviewed in this module. Finally, a complete walk through of the
installation process is presented.
Module 3: System Initialization and User Management
Once a system has been installed, various system initialization tasks need to be completed
before the system is ready to be deployed. Understanding what to configure at this stage is
as important as knowing how to configure it. Key concepts will be introduced, including
the notion of a system run level, and how to manage and modify the system
configuration and startup files. To use a system, users need to login using a username and
password. The basic processes behind authentication will be discussed in this module,
including practical issues like password selection. In addition, we investigate how to add,
delete and modify users and groups on the system by using command-line and GUI tools.
We also review how to examine the users who are logged into a system at any given time.
Module 4: Security and Process Control
Processes allow jobs to be performed on a Solaris system. They provide the envelope for
executing system calls, functions and other routines from within an application. Every
program running on a Solaris system, including user shells, must run as a process. Thus,
it's critical to understand how to work with and manage processes. Solaris provides tools
to display information about processes, and send signals to active processes instructing
them to terminate or restart. Process monitoring tools are an important operational aspect
of managing a Solaris system. Linked with the concept of processes is security: process
security, file security and user security. Every file on a Solaris file system has a
permissions string associated with it, allowing users, group members and all other users to
read, write and execute files, according to the permission string. From a security
perspective, it's important to understand how file permissions can easily allow intruders
access to a system if not set appropriately. In addition, default file permissions and high-level access control lists complement the standard UNIX file permission model on Solaris.
Module 5: Files, Directories and File systems
Solaris provides a number of different tools which operate
on files and file systems, including the volume manager, which allows floppy disks and
CD-ROM discs to be mounted and unmounted by unprivileged users. In addition, a
number of compression programs can be applied to individual files to increase the amount
of space available for other applications. In this module, we will review all of the standard
Solaris tools that perform file operations.
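As a small illustration of the compression tools mentioned above (gzip is used here for portability; Solaris also ships compress(1) and pack(1), and the sample file is invented):

```shell
# Compress a copy of a file to reclaim disk space, then restore it.
cd "$(mktemp -d)"
printf 'some configuration data\n' > sample.conf
gzip sample.conf                 # produces sample.conf.gz
ls -l sample.conf.gz             # the compressed file replaces the original
gunzip sample.conf.gz            # restores the original file
cat sample.conf
```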
Module 6: Booting and Disk Configuration
The booting process of a Solaris system can be quite complex, since literally hundreds of services
can be started. This requires efficient use of CPU time and advanced memory
management. We discuss both of these issues with respect to the Solaris boot process, and
the various boot and shutdown commands that can be used to manage a Solaris system.
Services are started and stopped by using scripts in the /etc/init.d directory which we will
examine in detail. Before disks can be used to host file systems, as discussed in the
previous module, they need to be physically added to the system. This operation can either
be performed while the system has been powered down, or in real-time by using the
correct command sequence. This high availability option is one of the best features of
Solaris in a production environment, since it minimizes downtime. We will
examine disk procedures closely, in addition to examining the different types of disk
device which map physical disk characteristics to logical system entities.
Module 7: Disks, Backup and Restore
Before disks can be used on a system, they must be formatted to ensure that no surface
errors exist that would prevent data being read and/or written correctly. The format
command is complex and contains a number of options, including surface analysis, which
are explained in this module. Once a disk file system has been created, it needs to be
backed up on a regular basis, by using a full or incremental dump. This ensures that, when
(not if) the disk eventually experiences a media failure, the contents of the disk can be
restored easily. In this module, the standard Solaris backup and restore procedures are
covered in depth.
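On Solaris the dump and restore cycle described above uses ufsdump and ufsrestore; as a portable sketch of the same idea (archive a directory tree, then recover it elsewhere), tar can stand in, with invented file names:

```shell
# A "full dump" of a tree into an archive, followed by a restore into a
# separate location, demonstrating that contents survive the round trip.
cd "$(mktemp -d)"
mkdir -p data && echo 'payroll records' > data/records.txt
tar cf backup.tar data           # archive the whole data tree
mkdir restore && tar xf backup.tar -C restore
cat restore/data/records.txt     # the restored copy matches the original
```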
Module 8: Basic Commands, Editors and Remote Access
Since much of the operation of a Solaris system involves command-line administration,
it's important to become competent with using the shell and the various utilities that can
be used with pipelines and other logical operators. Students will learn the bulk of these
commands and shell logic in this module, although some aspects will have been covered
in previous chapters. Basic commands to create, delete or update files will be given.
Special emphasis will be placed on editing new and existing text files by using the visual
editor (vi). Remote access to a Solaris system allows multiple users to login concurrently,
spawn separate shells, and execute different jobs. After mastering all of the topics covered
in this course, these skills can finally be applied to solving real world problems by
allowing other users to login to a system, and provide services. This module covers the
basic aspects of TCP/IP networking required to manage and support remote services, and
discusses some of the key security issues associated with providing remote access. We
also cover the configuration of local and remote printing services.
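A few of the basic file commands this module covers can be previewed in one short session (file names are arbitrary examples):

```shell
# Create, append to, inspect, copy and delete a file.
cd "$(mktemp -d)"
echo 'first line' > notes.txt    # create a file
echo 'second line' >> notes.txt  # append to it
wc -l notes.txt                  # count its lines: 2
cp notes.txt backup.txt          # copy it
rm notes.txt                     # delete the original
ls                               # only backup.txt remains
```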
Module 9: Clients and Servers
Solaris provides a solid foundation for client-server computing. In this module, you will
learn about common ways to host services, and some of the most frequently used clients,
especially for network services. Server and client installation and maintenance are covered
in detail.
Module 10: Solaris Network Environment
Networking provides a means for Solaris clients and servers to communicate with each
other. You will first learn about the conceptual Open Systems Interconnection (OSI) model
for networks, and then learn about the Transmission Control Protocol / Internet Protocol
(TCP/IP) stack.
Module 11: System Log Configuration
Accounting for application and user process utilization lays the foundation for billing and
capacity planning. In this module, you will learn how to configure system logging, and
how to monitor the syslog.
Module 12: Disk Management
Managing storage is a complex issue, since there are many different file formats and uses,
including virtual memory. In this module, you will learn about volume management, file
system repairs, and the /proc file system.
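As a small preview of the /proc file system mentioned above (shown here on a generic UNIX-like system; the exact per-process entries differ between Solaris and other implementations):

```shell
# Every running process appears as a numbered directory under /proc;
# $$ expands to the current shell's own process ID.
ls -d /proc/$$
ls /proc/$$ | head -5            # a few of the per-process state files
```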
Assignments Overview
The course emphasizes pragmatic administration skills by encouraging students to become
familiar with the standard UNIX shells that allow user and administrator processes to be
spawned. Small assignments and quizzes form the basis for assessment. Students will also
be required to prepare a large paper on a topic related to UNIX systems administration,
emphasizing how UNIX assists in solving a specific industry problem. For example,
students might choose to write about the relationship between UNIX, Java and e-commerce, and how Solaris high availability is critical to ensure application scalability.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe the key features of the Solaris operating environment and SunOS operating
system.
Define the roles and responsibilities of a Solaris system administrator and network
administrator.
Describe the requirements of the three exams needed for certification.
Discuss strategies for exam success.
Describe daemons, shells, file systems, the kernel, and the operating system.
Utilize methods for obtaining help, including man pages.
Readings
You may wish to complete the readings for this module in the order suggested.
https://docs.oracle.com/cd/E53394_01/html/E54847/
Features of UNIX
Solaris systems, being an implementation of a UNIX system, generally share the
following features:
A kernel, which is the core of the operating system, written in the C language.
Applications interact with the kernel by using system calls.
Hardware devices are represented logically by device files.
File systems are hierarchical, providing a directory structure, and provide fault-recovery solutions like journaling.
Multi-user processing in a client/server environment allows multiple users to boot
from the same server. Thousands of users may perform operations on a single system
concurrently.
Multi-process architecture allows multiple applications and services to execute
concurrently.
Multi-thread architecture allows processes to create Light Weight Processes
(LWPs) that have much less overhead than individual processes, reducing resource
usage by discrete tasks.
A set of standard text and flat-file database processing tools that allow configuration
files to be modified in a consistent way. Interestingly, after years of developing
proprietary binary format configuration files, many vendors now use text-based
XML (eXtensible Markup Language) for system configuration.
A consistent Character User Interface (CUI), provided by a user shell.
A consistent Graphical User Interface (GUI), provided by X11 and the Common
Desktop Environment (CDE).
Application architectures based on small, discrete programs or components that can
be logically sequenced to perform complex operations by using pipes, redirection
operators and other shell built-ins.
Application developer support is a priority, by providing easy-to-use APIs and
standard system libraries that are consistent with standard C libraries.
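The component philosophy in the list above can be seen in a one-line pipeline, where three small programs are chained to produce a frequency count (the sample input is invented for illustration):

```shell
# sort groups identical lines together, uniq -c counts each group, and
# sort -rn orders the counts from most to least frequent.
printf 'www\nmail\nwww\nftp\nwww\n' | sort | uniq -c | sort -rn
```

None of the three programs knows about the others; the shell's pipe operator is what composes them into a complex operation.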
Kernel
The kernel is the core of the SunOS operating system: it implements all of the
functionality that is necessary to support input/output and the process model. Many of the
higher level functions supported by the system, including Internet services, are executed
by daemons that are external to the kernel. Users interface with the kernel by spawning a
shell when they log in to the system. When data needs to be persisted, it is usually written
to a file system. At the heart of these more complex operations and services is the kernel.
The SunOS kernel has its roots in the Berkeley UNIX distribution (BSD), although it has
more recently become compliant with System V. The original UNIX kernel was written in
the C programming language - prior to UNIX, kernels were invariably written in
assembly language, which had to be changed each time a new system architecture was
developed. Using C allowed a level of abstraction between hardware and system software
that rapidly increased the speed at which new systems could be programmed and
developed. Concurrent with these developments was the introduction of the Integrated
Circuit (IC), allowing memory chips and Central Processing Units (CPUs) to be mass
produced, at ever-increasing clock speeds.
C programs can access kernel services directly by using system calls, or indirectly by
using system library APIs. Solaris man pages provide descriptions for the standard set of
system calls and library routines. It is not possible for user applications to communicate
directly with hardware devices: all commands can ultimately be traced to system calls and their
underlying basic functions invoked on specific hardware platforms.
The UNIX kernel is divided into four key components: the hardware control component;
the process management component; the file system component; and the system call
component. The hardware control component interfaces with hardware devices, and
implements the low-level operations required to read and write data to these devices. The
file system and process management components sit directly on top of the hardware
control component. The file system component implements all operations required to
support data persistence operations on disks, including raw (/dev/rdsk/) and block
(/dev/dsk/) devices. The process management component supports System V Inter-Process
Communication (IPC), process scheduling and memory management. The system call
component sits directly on top of the file system and process management components,
and provides the interface between the kernel and user applications (like the shell) or
system daemons (like the Internet Super Server, inetd), most often through system library
calls.
Daemons
Daemons are system services which operate as independent helpers, as they are not
built into the kernel. They provide high-level interfaces for local and remote users to
access different types of applications running on a system. All networked daemons must
have a port number defined in the services database (/etc/services). In addition, many
daemons are executed through the Internet Super Daemon (inetd), in which case, they are
defined in /etc/inetd.conf.
The following daemons are commonly found on Solaris systems:
The FTP daemon (in.ftpd), which implements a server for the File Transfer Protocol
The Telnet daemon (in.telnetd), which is a standard Telnet server for supporting
interactive logins
The remote shell daemon (in.rshd), which allows a remote user to spawn a shell on
the local system
The remote login daemon (in.rlogind), which allows a remote user to login to the
local system
The remote execution daemon (in.rexecd), which permits remote users to execute
commands on the local system
The talk daemon (in.talkd), which is a real-time chat service
The comsat daemon (in.comsat), which notifies logged-in users when new mail
arrives
The UNIX-to-UNIX Copy Program (UUCP) daemon, which allows compatible
UNIX systems to copy files to each other
The Trivial FTP daemon (in.tftpd) daemon, which supports the booting of diskless
clients from the local server
This list is not meant to be exhaustive; for more details, see the /etc/inetd.conf file.
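The services database mentioned above is a plain text file and can be inspected with standard tools; this self-contained example queries an invented local copy rather than the live /etc/services file:

```shell
# Each line of the services database maps a service name to a
# port/protocol pair, optionally followed by aliases.
cd "$(mktemp -d)"
cat > sample.services <<'EOF'
ftp     21/tcp
telnet  23/tcp
smtp    25/tcp   mail
EOF
grep -w telnet sample.services   # prints the matching telnet entry
```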
Shell
The shell is the basic CUI for users to interact with the kernel. Although the Bourne shell
(/bin/sh) was the original shell developed by Steve Bourne, there are many new and
improved shells available in Solaris, including the C shell (/bin/csh), the Korn shell
(/bin/ksh) and the Bourne again shell (/bin/bash). Each of these shells offers different
programming and job management facilities. This highlights the key function of shells:
although they are used to execute commands and run programs, they are also highly
programmable. This means that expert shell users can increase their productivity by
writing small shell scripts to perform repetitive tasks. Using the shell as a programmable
interface is often so time-saving that many advanced users prefer it over a GUI system like
Gnome.
The shell can be used to execute simple commands like the finger command, which
displays a list of currently logged-in users:
$ finger
Login Name TTY Idle When Where
jbloggs Joe Bloggs *pts/0 21: Wed 07:45 joe.bloggs.com
sbloggs Sue Bloggs *pts/1 11: Wed 19:32 modem1.bloggs.com
dbloggs Dana Bloggs *pts/2 09: Wed 02:56 modem2.bloggs.com
The $ here is the shell prompt; you enter the command name at the prompt and press
Enter to execute commands. When executed, the finger command displays the username,
full name, terminal number, idle time, login time and client hostname for each user. Thus,
Dana Bloggs logged in on terminal 2 after 2 a.m. on Wednesday from the host modem2,
and has been idle for 9 hours.
The shell contains many operators, such as the pipeline operator |, which allow filtering to be
performed. For example, to print only the login details for Dana, we could pipe the output
from the finger command to the grep command, which is a pattern-matching program,
giving the following result:
$ finger | grep Dana
dbloggs Dana Bloggs *pts/2 09: Wed 02:56 modem2.bloggs.com
There are no limits to the number of commands that can be chained together in this way.
For example, to redirect and append the output of the finger and grep combination to a file
called /tmp/dana_logins.txt, the following command could be used:
$ finger | grep Dana >> /tmp/dana_logins.txt
File system
Solaris supports many different types of file systems, including the following:
UNIX file system (ufs)
System V UNIX file system (s5fs)
MS-DOS file system (pcfs)
High Sierra file system (hsfs)
Zettabyte file system (zfs)
By default, Solaris uses the UNIX file system. The ufs is a hierarchical file system,
allowing directory entries to be created as special files underneath the top-level root
directory, denoted by /. Typically, the following directory entries will appear in the root
directory:
/dev - device files
/devices - device tree
/etc - system configuration files
/home - automounted home directories for users
/opt - optionally installed applications
/platform - kernel files
/tmp - temporary file space
/usr - installed applications
/var - accounting and logging
A ufs file system consists of a boot block, super block and inode blocks. The boot block
(block 0) is used by the system for booting, if the disk drive is bootable. The super block
stores all of the status information for the file system, including the total number of
blocks, the number of blocks set aside for inodes, the file system name, and list of unused
inodes. The inode blocks contain all of the information about files and directories that are
stored on the file system, including user and group ownership, file size, pointers to blocks
and the date on which the file was last accessed or updated.
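Inode metadata can be inspected with standard commands on any UNIX file system (the file name here is an arbitrary example, and the numbers printed vary from system to system):

```shell
# ls -i prints the inode number assigned to a file; df -k reports block
# usage for the file system holding the current directory.
cd "$(mktemp -d)"
touch example.txt
ls -i example.txt                # first column is the inode number
df -k .                          # total, used and available blocks
```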
Getting Help
There are some excellent, and in many cases free, resources that are available to explain
Solaris concepts, terms and commands which you may not understand. The following
sequence will assist you in finding the information you require:
In a Solaris shell, type man command to see the manual page for the command.
In a web browser, connect through to the URL http://docs.oracle.com/ and search the
Oracle system administrator and reference manuals.
In a web browser, connect through to the URL http://www.google.com/ and search
the archives of the USENET forum comp.unix.solaris. This is particularly useful for
troubleshooting problems which are not contained in the manual or in man pages.
Look at the Sun Managers list archive at http://www.sunmanagers.org
Procedure
1. Obtain Solaris for Intel from Oracle (www.oracle.com). Obtain
software (such as Boot Magic) that will allow Solaris for Intel to be installed on a
PC and dual-booted, if required; or
2. Obtain Solaris on a SPARC system by purchasing an old SPARC system from eBay
(www.ebay.com), and Solaris for SPARC from Oracle
(www.oracle.com). Suitable systems include SPARCstation 10 or 20, Ultra 5 or 10,
or Sun Blade 100. The latter can be purchased new from Oracle for around
$1,000; or
3. Create a free user account with a public access UNIX system like sdf.lonestar.org.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Perform actions at the OpenBoot prompt.
Recall key OpenBoot commands.
Configure devices using OpenBoot.
Secure hardware using OpenBoot.
Discuss preconfiguration strategies.
Investigate different hardware devices, types and systems.
Install Solaris systems.
Add new packages to the system using the pkgadd, pkginfo, pkgchk, and pkgrm
commands.
Install and manage patches by using the patchadd, patchrm, and showrev commands.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: OpenBoot PROM Monitor.
Read Theme 2: SPARC computer systems.
Read Theme 3: Solaris Installation.
Read Theme 4: Package Management.
Read Theme 5: Patch Management.
Complete Assignment 2.1: Solaris Key Benefits.
Complete Assignment 2.2: Package Installation.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
Solaris 11 Installation Guide, freely available from docs.oracle.com
The following kernel architectures correspond to representative SPARC systems:

sun4c - SPARCstation 1, SPARCstation IPX
sun4m - SPARCstation 10, SPARCstation 20
sun4d - SPARCserver 1000, SPARCcenter 2000
sun4u - UltraSPARC 10, Enterprise 420R

A sun4u architecture is required to install and operate Solaris 11.3 successfully. Indeed,
a binary application compiled on a specific kernel architecture can be executed on any
other system with the same architecture. This means that the binary executable does not
need to be recreated when it is exchanged between different systems, which is useful in a
NFS environment, where file systems are shared between hosts.
The following SPARC systems are supported by Solaris:
SPARCclassic
SPARCstation LX
SPARCstation 4
SPARCstation 5
SPARCstation 10
SPARCstation 20
Ultra 1 (including Creator and Creator 3D models)
Enterprise 1
Ultra 2 (including Creator and Creator 3D models)
Ultra 5
Ultra 10
Ultra 30
Ultra 60
Ultra 450
Enterprise 2
Enterprise 150
Enterprise 250
Enterprise 450
Enterprise 3000
Enterprise 3500
Enterprise 4000
Enterprise 4500
Enterprise 5000
Enterprise 5500
Enterprise 6000
Enterprise 10000
SPARCserver 1000
SPARCcenter 2000
If you wish to install a SPARC-based system, then insert the Installation DVD into the
DVD drive, and type the following at the OpenBoot prompt (assuming a run level of 0):
ok boot cdrom
You'll then see output like the following:
Boot device /pci@1f,0/pci@1,1/ide@2/cdrom@2,0:f File and args:
SunOS Release 5.11 Version Generic 64-bit
Copyright (c) 1983, 2016, Oracle and/or its affiliates. All rights reserved.
After analyzing the disk, and creating a swap space partition for virtual memory, the
installer copies a limited version of the operating system to disk (the mini-root), and
reboots. This boot reconfigures the /dev and /device directories, detecting any peripheral
devices that are attached to the system. Once the system is up, you'll be led
through a series of configuration choices, which set the following installation parameters:
Network
Name services
Date and Time
Root password
Power management
Proxy server
Once the system has rebooted again, the Gnome login screen will appear, and you may
login to the system using the root username and the root password selected during the
installation procedure.
To install Solaris on an Intel platform, download the appropriate .iso disk image, and burn
it to a blank DVD. This can be used to boot the system, if installation onto a primary boot
disk is to be performed. However, some users may wish to create a virtual installation
using Oracle VirtualBox. In this case, install VirtualBox first, and then configure a virtual
machine for Solaris, and boot using the DVD.
To display the currently installed patches on Solaris 10 and earlier releases, the
following command can be used:
# showrev -p
To install a patch called /tmp/106453-45 onto the system, the following command can be
used:
# patchadd /tmp/106453-45
Sometimes, an installed patch interferes with the operation of a system in an unexpected
way, and must be removed. The following command removes the patch 106453-45 from
the system:
# patchrm 106453-45
Procedure
1. Identify the key technology benefits of using Solaris.
2. Focus on advantages such as multi-user, multi-process and multi-threading
technology.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe the process of system booting and service configuration
Describe login procedures, such as logging into a system, logging out of a system.
State how to change a password.
Describe how to show which users are currently logged into a system.
Define how to create and modify user accounts and groups using the command-line
tools (useradd, groupadd, usermod, groupmod, userdel, or groupdel commands).
Set up initialization files for user shells.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: Run Levels.
Read Theme 2: Startup Files.
Read Theme 3: Monitoring System Access.
Read Theme 4: User Management.
Complete Assignment 3.1: Creating initialization files.
Complete Assignment 3.2: Changing run levels.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
Read the INSTALLING AND UPDATING SOLARIS 11 documents from http://docs.oracle.com/cd/E23824_01/.
Run levels are also called init states, because the init command can be used to change run
levels. Run levels can also be changed by using a number of specialized commands, like
shutdown, which serve the same purpose but behave slightly differently from calling
init directly. For example, the shutdown command notifies logged-in users of the
impending change and waits for a grace period before changing the run level.
The following example shows a shutdown using a 2 minute grace period, as viewed from
the console:
# shutdown -i0 -g120 -y
Using the init command to shut down is easy. The following command synchronizes disk
data and then shuts down the system and powers it off:
# sync; init 5
In contrast, the following command synchronizes disk data and then reboots the system:
# sync; init 6
To boot the system in single-user mode from the OpenBoot PROM monitor, the following
command can be used:
ok boot -s
When a system boots into the normal multi-user state, it works its way up through the
lower run levels to run level 3, which is the normal multi-user state including NFS file sharing.
Run levels and their actions are ultimately defined by the /etc/inittab file that contains
entries relating to activities conducted when each run level is reached. Each entry
comprises an identifier, the run levels to which it applies, an action keyword, and the
command to be executed, separated by colons.
The following is a sample /etc/inittab file:
ap::sysinit:/sbin/autopush -f /etc/iu.ap
ap::sysinit:/sbin/soconfig -f /etc/sock2path
fs::sysinit:/sbin/rcS sysinit >/dev/msglog 2<>/dev/msglog </dev/console
is:3:initdefault:
p3:s1234:powerfail:/usr/sbin/shutdown -y -i5 -g0 >/dev/msglog 2<>/dev/msglog
sS:s:wait:/sbin/rcS >/dev/msglog 2<>/dev/msglog </dev/console
s0:0:wait:/sbin/rc0 >/dev/msglog 2<>/dev/msglog </dev/console
s1:1:respawn:/sbin/rc1 >/dev/msglog 2<>/dev/msglog </dev/console
s2:23:wait:/sbin/rc2 >/dev/msglog 2<>/dev/msglog </dev/console
s3:3:wait:/sbin/rc3 >/dev/msglog 2<>/dev/msglog </dev/console
s5:5:wait:/sbin/rc5 >/dev/msglog 2<>/dev/msglog </dev/console
s6:6:wait:/sbin/rc6 >/dev/msglog 2<>/dev/msglog </dev/console
fw:0:wait:/sbin/uadmin 2 0 >/dev/msglog 2<>/dev/msglog </dev/console
of:5:wait:/sbin/uadmin 2 6 >/dev/msglog 2<>/dev/msglog </dev/console
rb:6:wait:/sbin/uadmin 2 1 >/dev/msglog 2<>/dev/msglog </dev/console
sc:234:respawn:/usr/lib/saf/sac -t 300
co:234:respawn:/usr/lib/saf/ttymon -g -h -p "`uname -n` console login: " -T sun \
-d /dev/console -l console -m ldterm,ttcompat
The SMC is also started by an entry in /etc/inittab.
To manage services, the svcs command is used. Some common tasks include:
svcs -a: shows all currently installed services
svcs -p: shows how processes are related to services
svcs -d: lists the services on which a given service depends
svcs -l: provides a long listing of data about an FMRI
Individual services can be managed via their FMRI using the svcadm command:
svcadm enable: enables an FMRI
svcadm disable: disables an FMRI
svcadm refresh: re-reads the FMRI configuration
svcadm restart: restarts the FMRI
Alternatively, if you'd like extended information about users, then the finger command
may be used:
$ finger
Login Name TTY Idle When Where
jbloggs John Bloggs *pts/0 7:20 Wed 07:27 sydney
ssmith Sue Smith *pts/1 5:43 Wed 07:27 adelaide
kjones Keisha Jones *pts/2 5:43 Wed 07:38 canberra
mdulce Marina Dulce *pts/3 4:43 Wed 07:56 newyork
Here, the full name of each user is displayed along with their client hostname (sydney,
adelaide, canberra and newyork). In security terms, it's often useful to verify that users are
logging in from where they are expected: obviously, if mdulce is stationed in New York,
then you wouldn't expect to see a login from Sydney. In this situation, it may be wise to
verify that the account is being accessed by the person authorized.
One way of authenticating logged-in users is to write to them in their logged-in terminal,
requesting an authentication token (like a birthdate), or a pre-arranged token, like "what is
your dog's name?". The administrator would type the following to query mdulce:
sydney# write mdulce
Dear Marina,
I notice that you are logged in from Sydney and not New York.
Could you please type your authentication token now, or your session will be terminated in 5 minutes.
Sincerely,
The Administrator (root@sydney)
^d
If the user responds with their birthdate or the pre-arranged token, then their session can
continue, otherwise it may be necessary to terminate their session.
root:x:0:1:Super-User:/:/sbin/sh
daemon:x:1:1::/:
bin:x:2:2::/usr/bin:
sys:x:3:3::/:
adm:x:4:4:Admin:/var/adm:
lp:x:71:8:Line Printer Admin:/usr/spool/lp:
uucp:x:5:5:uucp Admin:/usr/lib/uucp:
nuucp:x:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
listen:x:37:4:Network Admin:/usr/net/nls:
nobody:x:60001:60001:Nobody:/:
noaccess:x:60002:60002:No Access User:/:
nobody4:x:65534:65534:SunOS 4.x Nobody:/:
Here, the database fields are delimited by the colon character, and represent the following
attributes for the root user in this example:
The username (root)
mail::6:root
sysadmin::14:
nobody::60001:
noaccess::60002:
nogroup::65534:
Here, the database fields are again delimited by the colon character, with the following
attributes defined for the group mail:
The group name (mail).
Procedure
1. Shutdown your system using the shutdown command.
2. Boot the system into single user mode.
3. Make a note of the message concerning the root password.
4. Boot your system into multi user mode.
Procedure
1.
2.
3.
4.
5.
Learning Outcomes
Upon successful completion of this course, students will be able to:
State how to find regular expressions in files.
Define how to print or change directory and file permissions.
Explain the role of umask values in setting directory and file permissions.
List the procedures for using access control lists (ACLs).
Explain how to use the process management commands.
Describe the role of signaling in process management.
State the steps required to terminate processes.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: Regular Expressions.
Read Theme 2: File Security.
Read Theme 3: Access Control Lists.
Read Theme 4: Processes.
Read Theme 5: Signals.
Complete Assignment 4.1: Monitoring Processes.
Complete Assignment 4.2: File Permissions.
Complete Assignment 4.3: Access Control Lists.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
Here, each line that contains the string "Sarah" is returned. If we wanted to change the
story, and replace the source string "Sarah" with the target string "Aja", the following sed
command could be used:
$ sed 's/Sarah/Aja/g' nabby10.txt > nabby11.txt
Here, the regular expression s/Sarah/Aja/g is evaluated by sed against the contents of
nabby10.txt, replacing the source string Sarah with the target string Aja, and redirects
the output to the new file nabby11.txt. Now, if we repeat the grep command on the new
file nabby11.txt, the following output will be displayed:
$ grep Sarah nabby11.txt
Oops! There's no output, because every occurrence of "Sarah" in nabby11.txt has been
replaced by "Aja". To display all of the lines containing "Aja", the following command
can be used:
$ grep Aja nabby11.txt
More information regarding sed and regular expressions can be obtained from the sed
FAQ at http://sed.sourceforge.net/sedfaq.html.
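The grep-then-sed round trip described above can be sketched end-to-end in a portable shell session. The file name and story text here are stand-ins invented for the demonstration:

```shell
# Create a small sample file standing in for nabby10.txt
workdir=$(mktemp -d)
cat > "$workdir/nabby10.txt" <<'EOF'
Sarah walked to the harbour.
The wind picked up.
Sarah waved from the pier.
EOF

# Replace every occurrence of the source string with the target string
sed 's/Sarah/Aja/g' "$workdir/nabby10.txt" > "$workdir/nabby11.txt"

# The source string no longer appears in the new file; the target string does
grep -c 'Aja' "$workdir/nabby11.txt"
```

After the substitution, grep finds the target string on two lines and the source string on none, mirroring the behaviour described in the text.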
To remove the permissions, the following command could be used:
# chmod g-r database.txt
In terms of user classes, users are designated by u, group members by g, and all other
users by o. File permissions can be set for reading by r, writing by w and execution
by x. Since Solaris does not support file extensions to indicate executable status, all
executable files (including scripts and binaries) must have the executable bit set for the
user class that has permission to execute it.
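As a quick illustration of the symbolic classes (u, g, o) and permissions (r, w, x) just described, the following sketch applies them to a hypothetical file on any POSIX system:

```shell
# Create a scratch file to experiment on
permdir=$(mktemp -d)
touch "$permdir/script.sh"

# Full access for the user class only; strip group and other entirely
chmod u+rwx,go-rwx "$permdir/script.sh"

# Add read permission for group members, then for all other users
chmod g+r "$permdir/script.sh"
chmod o+r "$permdir/script.sh"

# The resulting permission string should read -rwxr--r--
ls -l "$permdir/script.sh"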
Let's look at examples of file permissions by using the ls command that lists files. If you
pass the -l option to ls, you will be able to display a list of file permissions for the current
directory:
$ ls -l
total 1428
-rwx------   1 pwatters phd    712808 Sep 23  2001 test*
-rw-------   1 pwatters phd       216 Sep 23  2001 test.cpp
Here, we can see that 1,428 x 512 byte blocks of data are stored in the directory, which
contains two files: an executable file called test, and a C++ source file called test.cpp. In
both cases, the user has read and write permission. However, for the test executable, the
user also has executable permissions. No other users have any permission to access the file
(apart from the super-user).
If we wished members of the group phd to have read access to the files, we could use the
following command to grant it:
$ chmod g+r *
$ ls -l
total 1428
-rwxr-----   1 pwatters phd    712808 Sep 23  2001 test*
-rw-r-----   1 pwatters phd       216 Sep 23  2001 test.cpp
Notice that a new r has been added to the fifth column; this indicates group read
permissions. Let's add read permissions for all users, and examine the result:
$ chmod o+r *
$ ls -l
total 1428
-rwxr--r--   1 pwatters phd    712808 Sep 23  2001 test*
-rw-r--r--   1 pwatters phd       216 Sep 23  2001 test.cpp
Note that the eighth column now has read permissions indicated for all users. There are ten
columns in total, which represent the following:
Column 1: the file type (- for a regular file, d for a directory, l for a symbolic link)
Columns 2-4: read, write and execute permissions for the user
Columns 5-7: read, write and execute permissions for the group
Columns 8-10: read, write and execute permissions for all other users
A + symbol at the end of the permissions string signifies that an ACL has been set on
a file, as shown in the following ls display:
# ls -l /usr/local/db/database.txt
-rw-------+  1 root sys 6876454 Apr 24 11:43 /usr/local/db/database.txt
In this example, the user has two processes running (26923 and 26934), both spawned
from terminal 8, and which have both executed minimal amounts of CPU time. The
applications running are the TC shell (tcsh) and the newmail command; the former is
running in the foreground, while the latter is running in the background.
The ps command has many options. For example, to display a list of all processes running
on a system, the ps -A command can be used as follows:
$ ps -A
PID TTY TIME CMD
0 ? 0:13 sched
1 ? 0:50 init
2 ? 0:03 pageout
3 ? 250:35 fsflush
562 ? 0:00 sac
345 ? 0:01 xntpd
255 ? 0:00 lockd
62 ? 0:00 sysevent
64 ? 0:00 sysevent
374 ? 0:00 dptelog
511 ? 0:00 keyserv
291 ? 0:11 cron
212 ? 0:00 in.ndpd
336 ? 0:17 utmpd
To display a full listing for all processes, the ps -Af command can be used:
$ ps -Af
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Apr 11 ? 0:13 sched
root 1 0 0 Apr 11 ? 0:50 /etc/init
root 2 0 0 Apr 11 ? 0:03 pageout
root 3 0 0 Apr 11 ? 250:35 fsflush
root 562 1 0 Apr 11 ? 0:00 /usr/lib/saf/sac -t 300
root 345 1 0 Apr 11 ? 0:01 /usr/lib/inet/xntpd
root 255 1 0 Apr 11 ? 0:00 /usr/lib/nfs/lockd
root 62 1 0 Apr 11 ? 0:00 /usr/lib/sysevent/syseventd
root 64 1 0 Apr 11 ? 0:00 /usr/lib/sysevent/syseventconfd
root 374 1 0 Apr 11 ? 0:00 /opt/SUNWhwrdg/dptelog
root 511 1 0 Apr 11 ? 0:00 /usr/sbin/keyserv
root 291 1 0 Apr 11 ? 0:11 /usr/sbin/cron
root 212 1 0 Apr 11 ? 0:00 /usr/lib/inet/in.ndpd
root 336 1 0 Apr 11 ? 0:17 /usr/lib/utmpd
Here, we can see the command names associated with each of the processes being
executed.
Extra columns in the top command include THR (number of threads spawned by a
process), PRI (process priority), NICE (process nice value), SIZE (process size), RES
(amount of application data resident in memory), and STATE (run state or sleep). A
summary of system load data is also provided, including CPU and memory load. To view
the status of all CPUs installed on a system, the psrinfo command can be used:
$ psrinfo
0 on-line since 04/11/02 04:09:58
1 on-line since 04/11/02 04:09:59
2 on-line since 04/11/02 04:09:59
3 on-line since 04/11/02 04:09:59
Signal    Code  Action  Description
SIGHUP    1     Exit    Hangup
SIGINT    2     Exit    Interrupt
SIGQUIT   3     Core    Quit
SIGILL    4     Core    Illegal Instruction
SIGTRAP   5     Core    Trace or Breakpoint Trap
SIGABRT   6     Core    Abort
SIGEMT    7     Core    Emulation Trap
SIGFPE    8     Core    Arithmetic Exception
SIGKILL   9     Exit    Killed
SIGBUS    10    Core    Bus Error
SIGSEGV   11    Core    Segmentation Fault
SIGSYS    12    Core    Bad System Call
SIGPIPE   13    Exit    Broken Pipe
SIGALRM   14    Exit    Alarm Clock
SIGTERM   15    Exit    Terminated
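The signals in the table can be exercised directly from the shell: kill sends a chosen signal to a process, and POSIX shells report a death by signal as an exit status of 128 plus the signal number. A minimal sketch, where sleep stands in for any long-running process:

```shell
# Start a long-running process in the background; sleep stands in for a daemon
sleep 60 &
victim=$!

# Send signal 15 (SIGTERM, "Terminated" in the table above)
kill -TERM "$victim"

# wait reports 128 + signal number when a child dies from a signal
status=0
wait "$victim" || status=$?
echo "exit status: $status"
```

Here the exit status is 143, that is, 128 + 15, confirming the process was terminated by SIGTERM rather than exiting normally.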
It's also possible to manage processes in the shell by using job management. This involves
running one process in the foreground, to which standard input is streamed, and any
number of jobs running in the background. If input needs to be entered on standard input
for an application running in the background, it must be brought into the foreground first.
Let's look at an example. Imagine that you are running the Bourne again shell in the
foreground. You then start an application called firewall, which executes in the
foreground until it is suspended by pressing CTRL+z. When a process is suspended, it
does not continue execution; it merely waits to be killed or to be resumed. To resume
execution in the background, the bg command must be used. To bring the application into
the foreground, the fg command must be used. If a number of processes are in the
background, then the job number (which is shown enclosed within square brackets when
the job is sent into the background) must be supplied. The following example shows the
firewall application being started in the foreground, suspended, sent into the background,
another command (ls) being performed in the foreground, and the suspended job being
brought back into the foreground:
$ firewall
^z
Suspended
$ bg
[1] firewall &
$ ls /home/pwatters
database.txt secret.txt
$ fg
It's important to remember that once the shell that spawned the original application has
exited, it is not possible to bring a background job into the foreground.
Procedure
1. Login to your system as root.
2. Use the ps command to identify a set of processes that belong to the root user
using the grep command.
3. Identify a process that can be safely killed.
4. Kill the process by sending a SIGKILL signal with the kill command.
Procedure
1.
2.
3.
4.
Procedure
1.
2.
3.
4.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe the purpose of the main system directories (/home, /etc, /opt, /usr, /export,
/).
Describe the available file system types supported by Solaris.
List the options used with the mount command and describe their purpose.
State the purpose of the /etc/mnttab and /etc/vfstab files.
Explain how the removable media volume manager works with floppy disks and
CD-ROMs.
List the steps required to compress a file.
Describe the purpose of regular files, directories, symbolic links, device files, and
hard links on a Solaris file system.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: File System Overview.
Read Theme 2: File Types and Permissions.
Read Theme 3: Mounting File Systems.
Read Theme 4: Volume Manager.
Read Theme 5: File Compression.
Complete Assignment 5.1: File System Features.
Complete Assignment 5.2: Copying Files.
Complete Assignment 5.3: Symbolic Links.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
Disk Devices
Physical device names typically identify the bus to which a device is attached, its address
and any arguments, while logical device names refer to more specific features. The
address has the form drv@addr:args where drv is a driver name, addr is a device
address, and args are any device arguments. In the case of hard disks, logical device
names include the slices which map to physical disk partitions, although most
administrators refer to slices and their underlying partitions interchangeably. When the
system performs a reconfiguration reboot, the entries in the /devices and /dev directories
are recreated to reflect any changes in the system's hardware, such as a new disk. Entries
in the /etc/minor_perm file determine how file permissions should be applied to any new
device files created. For example, the entry sd:* 0666 root wheel specifies that sd
disk nodes should have the octal permissions 666, owner root and group wheel.
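To see what an octal mode such as 0666 means in practice, the same value can be applied to an ordinary file with chmod. This is a generic illustration of octal permissions, not of the minor_perm mechanism itself, and the file name is invented:

```shell
# Octal 6 = read (4) + write (2), so mode 666 grants rw- to user, group and other
octdir=$(mktemp -d)
touch "$octdir/node"
chmod 666 "$octdir/node"

# The permission string should now read -rw-rw-rw-
ls -l "$octdir/node"
```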
The /etc/path_to_inst file associates each physical device with a logical device on the
system, as the mapping cannot be determined automatically. The following example
shows an entry for the SBUS of a SPARC system and a disk attached to it:
/sbus@1,0 1 sbus
/sbus@1,0/SUNW,fas@3,8800000 0 fas
/sbus@1,0/SUNW,fas@3,8800000/sd@4,0 34 sd
/sbus@1,0/SUNW,fas@3,8800000/sd@0,4 273 sd
/sbus@1,0/SUNW,fas@3,8800000/sd@1,5 281 sd
/sbus@1,0/SUNW,fas@3,8800000/sd@2,6 289 sd
/sbus@1,0/SUNW,fas@3,8800000/sd@3,7 297 sd
/sbus@1,0/SUNW,fas@3,8800000/sd@5,1 305 sd
The prtconf command can also be used to display disk device information. For an Ultra 5
system that has an IDE disk installed, the following display shows the details of the disk:
# prtconf
System Configuration: Sun Microsystems sun4u
Memory size: 128 Megabytes
pci, instance #0
pci, instance #0
ide, instance #0
disk
cdrom
dad, instance #0
sd, instance #30
File Systems
Solaris file systems always have two devices defined for communication with
applications: a raw device, stored in the /dev/rdsk directory, that is designed for low-level
operations, and a block device, stored in the /dev/dsk directory, that is intended for
high-level operations, including buffered reading and writing of data. Whether referred
to by their raw or block device names, file
systems have four characteristics that are combined to form a file system name:
controller (c)
target (t)
disk (d)
slice (s)
An example file system is /dev/dsk/c0t0d1s5, which can be read as controller 0, target 0,
disk 1, slice 5. By using such a complex nomenclature, a large number of disk controllers
and SCSI buses can be supported. For example, a Sun Enterprise 450 has 20 SCSI disk
bays supported by multiple controllers. Thus, if one controller breaks down, the other can
be used to immediately take its place, if used within a Redundant Array of Inexpensive
Disks (RAID). RAID technology allows a further abstraction of disk devices, referred to
as meta-disks, that allows large, virtual file systems to be constructed from smaller ones
(striping), or for disks to be made fully redundant with each other (mirroring).
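The cNtNdNsN nomenclature can be pulled apart mechanically. The following sketch parses a hypothetical device name with sed, simply to label its four components:

```shell
# Decompose a logical disk device name into its four components
dev="c0t0d1s5"
controller=$(echo "$dev" | sed 's/c\([0-9]*\)t.*/\1/')
target=$(echo "$dev"     | sed 's/.*t\([0-9]*\)d.*/\1/')
disk=$(echo "$dev"       | sed 's/.*d\([0-9]*\)s.*/\1/')
slice=$(echo "$dev"      | sed 's/.*s\([0-9]*\)/\1/')

# Reads as: controller 0, target 0, disk 1, slice 5
echo "controller=$controller target=$target disk=$disk slice=$slice"
```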
The default file system type for Solaris systems is the UNIX File System (UFS). UFS file
systems have four key components:
a boot block
super blocks
inodes
disk blocks
The boot block of a file system is used to store all data relating to booting a system. If a
file system has a valid boot block, then the operating system may be booted from it. A
system without a boot block on at least one file system cannot be booted from the installed
file systems. However, a boot block can be installed manually if necessary. Super blocks
store key file system data, including the size of the file system, the location of inodes, and
the number of disk blocks available. The inodes store information about the files stored on
the file system, while the disk blocks actually store the data.
In general, a Solaris file system is laid out in the following way:
Slice 0 - / partition
Slice 1 - virtual RAM (swap)
Slice 2 - whole disk
Slice 3 - /export
Slice 4 - swap space
Slice 5 - /opt
Slice 6 - /usr
Slice 7 - /export/home
However, if you wanted to use only one slice to store all data, such as Slice 6 for /usr files,
this is acceptable. Other partition names, such as /data, could also be used on any
slice. The exception is Slice 2, which shouldn't be used to directly store any file systems,
since it refers to the whole disk.
When expanded to an absolute path, the effective command might look like this,
depending on the current working directory:
$ /home/james/scripts/test.sh
However, if you wanted to execute a script in the parent directory of the current working
directory, the dot-dot notation should be used:
$ ../test.sh
When expanded to an absolute path, the effective command would look like this:
$ /home/james/test.sh
Using relative paths in this way is very useful in scripts and when working on the
command-line because the absolute path does not need to be entered or even known in
advance.
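A small sketch of the dot-dot expansion, using a throwaway directory tree in place of /home/james (all names here are hypothetical):

```shell
# Build a throwaway layout resembling /home/james/scripts
base=$(mktemp -d)
mkdir -p "$base/james/scripts"
cat > "$base/james/test.sh" <<'EOF'
#!/bin/sh
echo "script executed"
EOF
chmod +x "$base/james/test.sh"

# From inside scripts/, dot-dot reaches the parent directory,
# so ../test.sh expands to the script in james/
cd "$base/james/scripts"
result=$(../test.sh)
echo "$result"
```

The relative path ../test.sh resolves against the current working directory, so the same command works wherever the tree happens to be rooted.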
Symbolic links and hard links are used to create references to files. A hard link is a direct
pointer to a file that is equivalent to the original file, while a symbolic link simply creates
an indirect relationship between the link and the original file. Symbolic links are
commonly used to create references to directories and files that lie on different file
systems. Hard links can only be used within the same file system. An example of using
symbolic links is when a user wants to create a reference to a directory of test data that
resides on a file system with a long directory path, such as
/home/jimmy/data/base/dna/nucleotides. It is simply easier for the user tara to create a
symbolic link in her own home directory to this directory, by using the following
command:
$ ln -s /home/jimmy/data/base/dna/nucleotides /home/tara/nucleotides
The user tara may now cd to the nucleotides directory and back to her home directory as
parent, since the symbolic link is always relative. Note that the link name can be different
from its referent; thus, the following link would be equally valid:
$ ln -s /home/jimmy/data/base/dna/nucleotides /home/tara/nucs
Performing an ls -l on this link would show the following entry:
$ ls -l /home/tara/nucs
lrwxrwxrwx 1 tara 1 May 2 11:23 nucs -> /home/jimmy/data/base/dna/nucleotides
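The whole sequence can be tried in a scratch directory; the paths below are stand-ins for the /home/jimmy and /home/tara directories in the example:

```shell
# A throwaway stand-in for the long directory path in the example
linkbase=$(mktemp -d)
mkdir -p "$linkbase/jimmy/data/base/dna/nucleotides"
mkdir -p "$linkbase/tara"
echo "ACGT" > "$linkbase/jimmy/data/base/dna/nucleotides/sample.txt"

# Create the symbolic link under a different name, as in the text
ln -s "$linkbase/jimmy/data/base/dna/nucleotides" "$linkbase/tara/nucs"

# The link resolves transparently to the target directory
cat "$linkbase/tara/nucs/sample.txt"
```

Reading through the link returns the contents of the original file, while a `[ -L ... ]` test distinguishes the link itself from its target.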
Once mounted, the new file system can be referred to in the same way as any other file
system. To ensure that the file system is automatically mounted at boot time, an entry must
be made in the /etc/vfstab file, as shown below:
#device             device              mount  FS    fsck  mount    mount
#to mount           to fsck             point  type  pass  at boot  options
/dev/dsk/c0t0d0s6   /dev/rdsk/c0t0d0s6  /data  ufs   2     yes      -
When a system is shutdown, file systems are automatically unmounted. However, if you
want to perform maintenance on a disk that hosts a file system, then it can be manually
unmounted. The following command unmounts the /data file system:
# umount /data
The mount command makes a number of assumptions about a file system to be mounted,
including the fact that it is a UNIX File System (UFS). To modify these assumptions for
different situations, a number of options can be passed to the mount command, as shown
in the following summary:
bg: continues to attempt mounting in the background if the original attempt fails.
hard: continually sends requests to mount.
intr: permits keyboard interrupts while mounting.
largefiles: enables support for files larger than 2GB.
logging: creates a log of all file system transactions so that any lost transactions
can be recovered if a file system fails.
noatime: disables access-time updates on files, reducing disk activity.
remount: allows a soft remount to be performed.
The volcheck command will also mount any CD-ROMs that are in the CD-ROM drive,
making them available through the CDE file manager.
Once you've finished using a mounted floppy, you can use the eject command to eject the
floppy disk:
$ eject
To compress a file using the compress command, the following command would be used:
$ compress file.txt
This command would create a compressed file called file.txt.Z, and remove the original
file.txt. To retrieve the file's contents, the following command would be used:
$ uncompress file.txt.Z
This command would recreate the file file.txt, and delete file.txt.Z. Using gzip, which
generally gives higher compression ratios than compress, the process is similar:
$ gzip file.txt
This command would create a gzip compressed file called file.txt.gz, and remove the
original file.txt. To retrieve the file's contents, the following command would be used:
$ gzip -d file.txt.gz
This command would recreate file.txt, and delete file.txt.gz. When using compression,
keep in mind that the process of compressing and uncompressing data is CPU-intensive,
and for large files, might increase the system load substantially. The overhead increases
when repeat file packing is used; this is a strategy designed for achieving optimal
compression ratios, where the compressed file is iteratively compressed, to exploit
redundancies in the compressed version of the file. To request the highest compression
level from gzip, the following command could be used:
$ gzip -9 file.txt
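A round trip with gzip can be sketched as follows (compress may be absent on non-Solaris systems, so only gzip is shown, and the file contents are invented):

```shell
# Compress, then restore, a sample file; the contents must survive the round trip
gzdir=$(mktemp -d)
printf 'line one\nline two\n' > "$gzdir/file.txt"
original=$(cat "$gzdir/file.txt")

# -9 requests the highest compression level; creates file.txt.gz, removes file.txt
gzip -9 "$gzdir/file.txt"

# -d decompresses; recreates file.txt, removes file.txt.gz
gzip -d "$gzdir/file.txt.gz"
restored=$(cat "$gzdir/file.txt")
echo "$restored"
```

As the text notes, each direction replaces the file rather than keeping both copies, which is why the uncompressed and compressed versions never coexist.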
Procedure
1.
2.
3.
4.
Procedure
1. Login to your system as root.
2. Make a copy of three files into the /tmp directory, such as /etc/passwd,
/etc/group and /etc/shadow.
3. Check their file sizes using ls.
4. Compress all three files by using compress and then gzip.
5. Check their file sizes using ls again.
6. Save the ls output.
Procedure
1.
2.
3.
4.
5.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe the different ways that a system can be booted.
List all of the Solaris run levels.
Explain how to change init states.
Explain how to perform a reconfiguration reboot.
State the relationship between raw and block disk devices.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: Booting.
Read Theme 2: Making File Systems.
Read Theme 3: Monitoring File Systems.
Read Theme 4: Repairing File Systems.
Complete Assignment 6.1: Disk Volumes.
Complete Assignment 6.2: System Configuration.
Complete Assignment 6.3: Raw and Block Devices.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
When a new hardware device is added to the system, a reconfiguration boot must be
performed, so that the appropriate physical and logical device files can be created in the
/devices and /dev directories respectively. This can be performed in one of two ways:
from the OpenBoot PROM monitor, or from a root shell. Using the OpenBoot method, the
-r option is simply passed at the ok prompt with the boot command:
ok boot -r
Alternatively, from a root shell, the following command can be executed:
# sync; touch /reconfigure; init 6
This command will synchronize disk data, and perform a reconfiguration reboot.
To create a new UFS file system on a slice, the newfs command can be used:
# newfs /dev/rdsk/c0t0d0s5
This command is equivalent to the following mkfs command:
# mkfs -F ufs /dev/rdsk/c0t0d0s5
The newfs command has the following options:
-a q: reserves q blocks to be substituted for bad blocks.
-b q: specifies the size of file system blocks to be q bytes.
-c q: provides q cylinders for individual cylinder groups.
-C q: sets q as peak contiguous disk block count for each file.
-d q: specifies the rotational delay to be q milliseconds.
-f q: specifies the minimum size (q bytes) for an individual file disk fragment.
-i q: sets q bytes aside for each inode.
-m q: sets aside q% of the physical filesystem as a reserve.
-n q: specifies group cylinder rotation number to q.
-r q: specifies peak disk RPM to q.
-s q: specifies the disk size as q sectors.
-t q: sets q tracks aside for each cylinder.
The mkfs command can be used to create file systems of the following types:
ufs - UNIX file system
Although you can spend countless hours, days and nights monitoring df, looking for
capacities close to 100%, a better method to use is linear estimation to provide an
educated guess as to when the file system will be full. This involves using a spreadsheet to
make a forecast on the basis of data collected each month from df. Imagine if the
following readings had been taken over a period of 11 months:
Month   Capacity (%)
Jan
Feb
Mar
Apr     16
May     32
Jun     64
Jul     68
Aug     76
Sep     83
Oct     84
Nov     90
Now, with a reading of 90% full, it's not clear whether you should take action now to
resolve the space problem, since some early months only saw changes of 1-2%, meaning
that another 5-10 months might be required for the file system to fill up. Alternatively,
some months saw a 32% rise, meaning that only a few days might be left until the file
system has reached capacity. While there is no way to tell the future, by using a linear
prediction model, it's possible to predict the capacity of the disk at the end of the next
month. It's also possible to evaluate how well the model fits the previous data by using
the square of the correlation coefficient (R2).
Figure 1 shows the result of performing a linear regression on the capacity data to predict
a 99.18% capacity by the end of the coming month. Future capacity values can be
predicted by the equation produced from the regression (y = 9.7727x - 10.924). In
addition, with the fit of the model (R2) equal to 0.9365, more than 93% of the variation in
the existing data can be explained by this equation. This suggests that the model is
reliable, and may be used to predict future capacity values.
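The same least-squares fit can be computed with awk instead of a spreadsheet. Since the January through March readings are not preserved in the table above, this sketch fits only the eight recorded months (Apr=4 through Nov=11), so its coefficients differ from those quoted for the full eleven-month figure:

```shell
# Least-squares fit of capacity (%) against month number (Apr=4 ... Nov=11)
fit=$(printf '4 16\n5 32\n6 64\n7 68\n8 76\n9 83\n10 84\n11 90\n' | awk '
  { n++; sx += $1; sy += $2; sxx += $1*$1; sxy += $1*$2 }
  END {
    # Standard closed-form simple linear regression
    slope = (n*sxy - sx*sy) / (n*sxx - sx*sx)
    intercept = (sy - slope*sx) / n
    # Extrapolate one month past the data (month 12)
    printf "slope=%.2f intercept=%.2f next=%.1f", slope, intercept, slope*12 + intercept
  }')
echo "$fit"
```

On these eight points the fitted line is roughly y = 10.04x - 11.14, predicting that the file system passes 100% capacity within the coming month, the same qualitative conclusion as the full regression.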
What action can an administrator take when a disk volume is approaching capacity? The
following strategies should be investigated:
Using the find command to locate core files and remove them. Core files are memory
dumps that are produced when an application crashes, and contain debugging
information that developers rarely seem to use. Thus, they can usually be deleted (but
check with your developers first).
Use the find command to locate the largest files on the file system. Use the gzip
command to repeat pack and compress them.
Enforce user quotas.
Buy a larger capacity disk and transfer user directories from the original disk.
Use RAID technology (striping) to logically extend the length of the file system.
By using these strategies in combination, disk space can usually be extended in times of
crisis.
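The find-based strategies above can be sketched in a scratch tree. Note that the -size +100k suffix shown is a GNU find convenience (Solaris find traditionally takes -size in 512-byte blocks), and all paths here are invented:

```shell
# Build a throwaway tree with one large file and one small file
finddir=$(mktemp -d)
mkdir -p "$finddir/logs"
dd if=/dev/zero of="$finddir/logs/big.log" bs=1024 count=200 2>/dev/null
dd if=/dev/zero of="$finddir/small.txt" bs=1024 count=1 2>/dev/null

# Locate files larger than 100KB: candidates for compression or removal
large=$(find "$finddir" -type f -size +100k)
echo "$large"

# Remove stale core files (after checking with your developers first)
touch "$finddir/core"
find "$finddir" -name core -type f -exec rm {} \;
```

The size search surfaces only the 200KB log file, and the name search deletes the core file while leaving everything else in place.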
Procedure
1.
2.
Procedure
1.
2.
3.
Procedure
1.
2.
3.
Learning Outcomes
Upon successful completion of this course, students will be able to:
State the purpose of the format command and identify how it is used.
Describe the menu selections available for the format command.
Describe how to use the partition option with the format command.
State the role of backup and restore applications.
Describe how to backup a file system to tape.
Describe how to restore a file system from tape.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: Format.
Read Theme 2: Partition.
Read Theme 3: Backup.
Read Theme 4: Restore.
Complete Assignment 7.1: Format.
Complete Assignment 7.2: Ufsdump.
Complete Assignment 7.3: Ufsrestore.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
To operate on a specific disk, simply enter the appropriate disk number. If the disk is new,
it will need to be formatted, in which case the format menu command should be selected:
format> format
Ready to format. Formatting cannot be interrupted
and takes 30 minutes (estimated). Continue?
If the disk has previously been formatted, the following message will be displayed:
[disk formatted]
At this point, all of the operations described above can be performed.
Here, we can see that the partition table identifies five user partitions, excluding partition
2, which represents the whole disk.
After files have been successfully backed up to disk, they can be easily restored by using
the ufsrestore command. This command does not automatically write the files to the
absolute location from which they were recorded. Instead, they can be written to a
temporary directory (such as /tmp) and compared with the current copies on disk. The
hierarchical structure of the dump is always preserved; thus, if files are recorded from
/usr/local, including /usr/local/games and /usr/local/bin, then if the files are restored to
/tmp, then /tmp/local, /tmp/local/games and /tmp/local/bin will all be preserved.
To restore data from the tape drive /dev/rmt/0, the following command can be used:
# ufsrestore xf /dev/rmt/0
You have not read any volumes yet.
Unless you know which volume your file(s) are on you should start
with the last volume and work towards the first.
Specify next volume #: 1
set owner/mode for .? [yn] y
This will extract all of the files from the first volume recorded on the tape. It is
possible to record multiple volumes on a tape, but to avoid the risk of accidental
overwriting, it is suggested that a separate tape be used for each volume backed up.
If you have a backup volume, but you're not sure which files are located on the tape, then
the following command can be used to display a table of contents:
# ufsrestore tf /dev/rmt/0
74333 ./local/bin
34341 ./local/games
108674 ./local
The following commands are supported by ufsrestore when executed in interactive mode
from the command line:
ls: display directory contents
cd: change absolute or relative directory
pwd: display current working directory
add: adds a file to a list of files to be retrieved
delete: removes a file from a list of files to be retrieved
extract: retrieves listed files
setmodes: sets permissions on retrieved files
quit: quits ufsrestore
what: prints tape header information
verbose: switches to verbose mode
help: displays the ufsrestore help screen
Procedure
1. Login to your system as root.
2. Execute the format command.
3. Display a list of installed disks.
Procedure
1. Login to your system as root.
2. Identify the name of your tape drive device.
3. Use the ufsdump command to back up the /etc directory to the tape device.
Procedure
1.
2.
3.
4.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe how to navigate through a file system using standard shell commands.
State how to use wildcards to match groups of files.
List the commands used to print directory entries and their file types.
Describe how to create or delete directories.
List the commands required to copy, create, move, or remove files.
Describe how to edit files using the vi editor.
List basic vi commands.
State how to search and replace strings using vi.
Describe how to remotely access a Solaris system.
List the commands used in FTP to transfer files between hosts.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: Basic Commands.
Read Theme 2: Editor.
Read Theme 3: Remote Access.
Complete Major Assignment
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
At this point, a valid HTTP command sequence can be entered, and if the server is
working, then the appropriate data should be returned:
GET index.html
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML><HEAD>
<TITLE>Dalek Index Page</TITLE></HEAD>
<h1>This is the dalek operations server</h1>
.
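The request typed above is the minimal form; a full HTTP/1.0 request adds a protocol version, a Host header and a terminating blank line. The following sketch simply prints such a request, without transmitting it (the host name dalek is taken from the surrounding example):

```shell
# Print a minimal HTTP/1.0 request, terminated by a blank line, as a
# client would send it after connecting to port 80.
printf 'GET /index.html HTTP/1.0\r\nHost: dalek\r\n\r\n'
```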
The ftp command is similar to the telnet command, in that they are both TCP clients. The
ftp command creates a connection to a remote FTP server, allowing files to be transferred
as required. The following output shows a sample FTP session:
$ ftp dalek
Connected to dalek.
220 server FTP server (SunOS 5.11) ready.
Name (dalek:davros): davros
331 Password required for davros.
Password:
230 User davros logged in.
ftp>
At this point, users can issue get or put commands to transfer files from and to the
server, in either ASCII or binary mode.
Both Telnet and FTP have been identified as security risks in recent years, since a user
must enter their username and password in order to authenticate themselves. The problem
here is that the username and password are sent in the clear, and can be intercepted by
any other system whose network interface is operating in promiscuous mode. This
provides crackers with the ability to intercept usernames and passwords and use them to
break into your system.
One solution to the Telnet and FTP security problems is to use Secure Shell (SSH) and the
Secure Copy (SCP) programs, since SSH and SCP provide sophisticated mechanisms for
the secure exchange of authentication tokens like usernames and passwords. In this case,
an interactive login can be obtained by using the SSH program, while transferring files can
be achieved by using the SCP program. Both applications make use of cryptography to
effectively hide any data transferred from client to server and server to client. Although
the packets can be intercepted by a third party, their contents will be meaningless, unless
the interceptor has obtained the private key of the user. In addition, a session key is
required to decrypt the data from an individual session. This combination makes it very
unlikely that a cracker would be able to decode data transmitted across a secure link.
Procedure
1. Define the functions that the application will perform (e.g., adding a new
user).
2. Design the application.
3. Create the scripts.
4. Test the scripts.
5. Write a report about the application describing its design, implementation and
testing.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe the server types implemented in Solaris.
Describe the client types implemented in Solaris.
State the steps required to install a Solaris server.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: Servers.
Read Theme 2: Clients.
Read Theme 3: Server Installation.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
int socket(int domain, int type, int protocol);
where domain is generally PF_INET (for supporting IP), and the socket type can be
a stream (SOCK_STREAM) for TCP, datagram (SOCK_DGRAM) for UDP, or raw
(SOCK_RAW) for use only by the super-user. The protocol number for the TCP/IP
family is 0, so a stream TCP socket can be created by using the following call:
int sd = socket(PF_INET, SOCK_STREAM, 0);
Both a server and client application require a socket to be created in order to
communicate with each other. When a server starts up, it begins by listening on a
specific port number for client requests. For example, the sendmail server listens for
TCP connections on port 25. The ports on which servers listen are mapped by the
services database stored in /etc/services. The following entries show a set of standard
service entries:
ftp 21/tcp
telnet 23/tcp
smtp 25/tcp
whois 43/tcp
domain 53/tcp
domain 53/udp
tftp 69/udp
finger 79/tcp
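These entries map a service name to a port/protocol pair, which is exactly what a getservbyname() lookup returns. The lookup logic can be sketched in the shell with awk; the sample entries are inlined so the example is self-contained, whereas a real lookup would read /etc/services:

```shell
# Create a sample services file and look up the port/protocol pair
# for the smtp service.
cat > /tmp/services.sample <<'EOF'
ftp     21/tcp
telnet  23/tcp
smtp    25/tcp
domain  53/udp
EOF
awk '$1 == "smtp" { print $2 }' /tmp/services.sample   # prints: 25/tcp
```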
To locate a server on the network, the client uses the gethostbyname() or
gethostbyaddr() system call to retrieve IP address information for the server. Once the
client can find the server, it can retrieve the port number for a specific service by using the
following call:
getservbyname(service, "tcp")
Here, service is the service name to be requested, and tcp is the protocol. Once a
connection has been established between a client and server, with a specific service
request, then requests can be made and responses dispatched as per the protocol
concerned. This generally involves sending a string from the client which is
extracted from standard input on the server. The status of socket connections can be
displayed by using the netstat command. A clearer relationship between port numbers,
server processes and client connections can be observed by combining the output of
the ps and netstat commands:
$ ps -eaf | grep nfsd
root 629 1 0 Feb 27 ? 0:11 /usr/lib/nfs/nfsd -a 16
$ netstat -a | grep nfsd
TCP
Local Address Remote Address Swind Send-Q Rwind Recv-Q State
*.nfsd *.* 0 0 0 0 LISTEN
Here, the NFS server daemon (nfsd) can be seen in both the process list and the
socket list. All servers should have an entry in the process list, unless they have
been spawned by the Internet daemon (inetd), and they should also have a
corresponding socket entry when listening for connections.
The bolded commands are typed by the user joe, who has initiated a client session,
connecting from the client system to send a message to ernie at the remote server,
both within the domain paulwatters.com. Note that the telnet command can be used to act
as a client here; no special client software is required, although many clients offer
better editing facilities and automation of the mail exchange process. The standard SMTP
commands HELO, MAIL, RCPT, DATA, ".", and QUIT are used to communicate the
message data and meta-data to the MTA running on the server. After each request is sent
by the client, the server responds with a specific response code, such as 220, 250, 354
and 221. These can be parsed by the client program when it receives a response.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe the role of the levels in the OSI stack.
Describe the purpose of the levels in the TCP/IP stack.
List the properties of ethernet networking.
State the commands used to monitor network interface status.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: OSI Stack.
Read Theme 2: TCP/IP Stack.
Read Theme 3: Network Interfaces.
Complete Assignment 10.1: Client/Server Benefits.
Complete Assignment 10.2: RPC Services.
Complete Assignment 10.3: Starting/Stopping Services.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
1. Network Layer
An integral part of the Network Layer is the choice of transmission media. Most modern
networks are composed of ethernet capable of transmitting 10Mbps (10BASE-T) or
100Mbps (100BASE-T). However, more modern networks run ethernet at speeds of
1Gbps (1000BASE-FX) or even 10Gbps. The great advantage of ethernet over earlier
network media is its ability to effectively share a single media for network transmission
between multiple hosts, since it is based on a bus architecture. Ethernet features an
advanced protocol to detect and minimize packet collisions between devices that wish to
transmit data concurrently. Obviously, as network bandwidth and speed increases, the
potential for collisions also grows. Other networking technologies employed at the
Network Layer include the Fiber Distributed Data Interface (FDDI), which is an optic
fiber implementation of a token ring. This technology ensures that there are no collisions,
but available bandwidth does not yet match that of ethernet. An alternative architecture is provided
by Asynchronous Transfer Mode (ATM) networks, which are connection-oriented and
suitable for systems which are always on and which must have guaranteed quality of
service, such as video conferencing.
Level 2 of the TCP/IP stack is the Internet Layer. This layer implements the low-level
Internet Protocol (IP) that the transport protocols in Level 3 (the Transport Layer) rely on
to manage routing and packet assembly and disassembly. Being one level above the Network
Layer, IP uses IP addresses to identify hosts on the network. These IP addresses are
mapped to MAC addresses by using the Address Resolution Protocol (ARP). While ARP works
only within local area networks, IP allows data to be exchanged between hosts on
different networks. IP networks are divided
into three distinct classes for the purposes of defining sub-networks, or subnets, each
with its own mask, known as the netmask. Three classes of network are supported:
Class A (netmask 255.0.0.0), Class B (netmask 255.255.0.0) and Class C (netmask
255.255.255.0). Within subnets, IP addresses can be allocated manually to hosts, or
dynamically by using the Dynamic Host Configuration Protocol (DHCP). DHCP acts to
conserve the pool of available IP addresses within a subnet by only allowing clients to
lease an address for a certain period of time. When that period expires, if the host is
down, its lease expires and its IP address can be re-allocated to another host.
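The netmask determines how many bits of an address identify hosts rather than networks. Subtracting the network and broadcast addresses gives the usable host count per class; a quick arithmetic check in the shell:

```shell
# Usable hosts = 2^(host bits) - 2 (the network and broadcast
# addresses are reserved). Class A has 24 host bits, B has 16, C has 8.
echo "Class A: $((256 * 256 * 256 - 2)) hosts"   # 16777214
echo "Class B: $((256 * 256 - 2)) hosts"         # 65534
echo "Class C: $((256 - 2)) hosts"               # 254
```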
IP routing allows packets sent from one host to another to reach their destination, since
there may be many intermediate hosts between a client and server. For example, on a local
network, only a single hop is required to transmit packets between a client and server, as
the traceroute command shows:
# traceroute austin
traceroute to austin (10.64.18.1), 30 hops max, 40 byte packets
1 austin (10.64.18.1) 0.675 ms 0.392 ms 0.305 ms
However, to transmit packets across the Internet, many hosts may pass a packet along
from the client until it reaches its server. For example, to connect from a client in Sydney,
Australia, to Oracle's web server in the U.S., more hops will be required:
$ traceroute www.Oracle.com
Tracing route to wwwwseast.usec.Oracle.com [192.9.49.30]
over a maximum of 30 hops:
1 184 ms 142 ms 186 ms 202.10.4.131
2 147 ms 288 ms 186 ms 202.10.4.129
Here, we can see the first two of the hops required to pass a packet from client to server.
A second protocol is supported by the Internet Layer: the Internet Control Message Protocol
(ICMP). ICMP allows error messages to be propagated, and allows for higher-level
management such as the prevention of congestion. ICMP logically sits on top of IP.
The Transport Layer is Level 3 in the TCP/IP stack. This layer encapsulates all of the
transport protocols, including the Transmission Control Protocol (TCP) and the User
Datagram Protocol (UDP). The former is a connection-oriented protocol that guarantees
packets will be delivered in a specific sequence, while UDP makes few guarantees but has
less overhead.
Level 4 is the Application Layer, which supports most of the commonly used protocols
such as Telnet, FTP, HTTP, NFS and SMTP. Most application developers and end-users
work with protocols that are encapsulated by the Application Layer.
All of the layers are exposed when performing operations like packet sniffing. The
following example shows the data exchanged per-layer for a single packet:
# snoop -v tcp port 23
Using device /dev/hme0 (promiscuous mode)
ETHER: Ether Header
ETHER:
ETHER: Packet 1 arrived at 14:13:22.14
ETHER: Packet size = 60 bytes
ETHER: Destination = 1:58:4:16:8a:34,
ETHER: Source = 2:60:5:12:6b:35, Oracle
ETHER: Ethertype = 0800 (IP)
ETHER:
IP: IP Header
IP:
IP: Version = 4
IP: Header length = 20 bytes
IP: Type of service = 0x00
IP: xxx. .... = 0 (precedence)
IP: ...0 .... = normal delay
IP: .... 0... = normal throughput
IP: .... .0.. = normal reliability
IP: Total length = 40 bytes
IP: Identification = 46864
IP: Flags = 0x4
IP: .1.. .... = do not fragment
IP: ..0. .... = last fragment
IP: Fragment offset = 0 bytes
IP: Time to live = 255 seconds/hops
IP: Protocol = 6 (TCP)
IP: Header checksum = 11a9
IP: Source address = 64.23.168.76, moppet.paulwatters.com
IP: Destination address = 64.23.168.48, miki.paulwatters.com
IP: No options
IP:
TCP: TCP Header
TCP:
TCP: Source port = 62421
TCP: Destination port = 23 (TELNET)
Here, we can see the entries for TELNET (level 4), TCP (level 3), IP (level 2) and ETHER
(level 1).
4. Application Layer
3. Transport Layer
2. Internet Layer
1. Network Layer
This output shows all of the parameters for the network interface as configured. The
interface is up, meaning that it is accepting connections. To display a list of the
modules on the device stream, showing the different layers, the following command can be used:
# ifconfig hme0 modlist
0 arp
1 ip
2 hme
If a network interface has not been logically configured to work with the system, it can be
manually plumbed by using the following command:
# ifconfig hme0 plumb
To remove the logical configuration, the interface can be unplumbed:
# ifconfig hme0 unplumb
This will prevent connections from being accepted. The configuration will be reported as
follows:
# ifconfig hme0
hme0:flags=1000843<DOWN,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500
index 2
inet 10.64.18.3 netmask ffffff00 broadcast 10.64.18.255
ether 8:0:20:c6:a5:72
Of course, you should never unplumb an interface from a remote terminal! To bring the
interface back up, the following command could be used:
# ifconfig hme0 up
Procedure
1. Review the reading assignments for the first module.
2. Read the latest material on Solaris server benefits from the Oracle home page
(www.Oracle.com).
3. List the key benefits of Solaris in a client/server context.
4. Write a 1,000 word paper summarizing these benefits.
Procedure
1.
2.
3.
4.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe the different functions of syslog.
Describe the syntax of the syslog.conf configuration file.
Interpret a syslog file containing different classes and event types.
Create syslog entries from the command-line.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: Syslog daemon.
Read Theme 2: Syslog configuration.
Read Theme 3: Using syslog.
Read Theme 4: Syslog and the command-line.
Complete Assignment 11.1: Monitoring syslog.
Complete Assignment 11.2: Modifying syslog.conf.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
Alternatively, if all emergency events are directed into the file
/var/adm/messages.alert, then another administrator could continuously monitor
new entries by using the following command:
# tail -f /var/adm/messages.alert
Priorities are associated with messages created by different facilities, which are
identified by a different set of codes:
AUTH authentication messages
CRON scheduling daemon messages
Let's examine a sample segment from the default logfile /var/adm/messages:
$ cat /var/adm/messages
Apr 17 20:34:37 ivana genunix: [ID 540533 kern.notice] SunOS Release 5.11 Version Generic 64-bit
Apr 17 20:34:37 ivana genunix: [ID 784649 kern.notice] Copyright 1983-2000 Oracle Microsystems, Inc. All
rights reserved.
Apr 17 20:34:37 ivana genunix: [ID 678236 kern.info] Ethernet address = 8:0:20:c6:a5:72
Apr 17 20:34:37 ivana unix: [ID 389951 kern.info] mem = 131072K (0x8000000)
Apr 17 20:34:37 ivana unix: [ID 930857 kern.info] avail mem = 122445824
Apr 17 20:34:37 ivana rootnex: [ID 466748 kern.info] root nexus = Oracle Ultra 5/10 UPA/PCI (UltraSPARCIIi 360MHz)
Apr 17 20:34:37 ivana rootnex: [ID 349649 kern.info] pcipsy0 at root: UPA 0x1f 0x0
Apr 17 20:34:37 ivana genunix: [ID 936769 kern.info] pcipsy0 is /pci@1f,0
Apr 17 20:34:37 ivana pcipsy: [ID 370704 kern.info] PCI-device: pci@1,1, simba0
Apr 17 20:34:37 ivana genunix: [ID 936769 kern.info] simba0 is /pci@1f,0/pci@1,1
Apr 17 20:34:37 ivana pcipsy: [ID 370704 kern.info] PCI-device: pci@1, simba1
Apr 17 20:34:37 ivana genunix: [ID 936769 kern.info] simba1 is /pci@1f,0/pci@1
These entries were created at boot time on an Ultra 5 system. Only
NOTICE and INFO messages are shown for the kernel (KERN). Each message
comprises a timestamp, hostname (ivana), a unique ID for each message, and the
facility and priority number separated by a period (such as kern.info for the KERN
facility at the INFO level), and the message. For example, the message:
Apr 17 20:34:37 ivana unix: [ID 930857 kern.info] avail mem = 122445824
shows that on April 17th at 8:34 pm, on system ivana, the unix kernel logged an
informational message that 122445824 bytes of RAM were available (about 116M). If a
kernel module generates the message, then its name will be printed instead of
unix after the hostname. Some examples included in this output include the
modules genunix, rootnex, and pcipsy. Identifying modules that cause errors can
assist in debugging system crashes and unexpected system activity, particularly
during booting.
This file specifies that all ERR, KERN.NOTICE and AUTH.NOTICE messages
should be redirected to the console, and any other devices specified by the
/dev/sysmsg device. Note that the wildcard character (*) is used to specify all ERR
level messages. Multiple actions can be associated with each facility level. For
example, the second line indicates that all ERR level messages should be written to
the /var/adm/messages file, as well as being written to the console as specified by
the first line. The third line specifies that all ALERT messages should be sent to the
user pwatters, and the next states that all INFO messages should be sent to the root
user. Finally, all EMERG messages should be broadcast to all users.
A more advanced syslog.conf file looks like this:
$ cat /etc/syslog.conf
*.notice /var/log/notice
*.info /var/log/info
*.crit /var/log/crit
*.err /var/log/err
Here, we can see that the NOTICE, INFO, CRIT and ERR messages are being
redirected to their own log files. This allows easy access to different facility level
messages without having to use the grep command, which can be time saving when
filtering large files.
However, since most administrators do not monitor these files 24 hours per day, an
automated approach to extracting pertinent messages must be devised. The
following script shows how to use the date, cut and grep commands to extract all
messages for a particular string, recorded today:
$ cat filter_syslog.sh
#!/bin/sh
# filter_syslog.sh
# Takes parameter $1 as a string to be searched for in /var/adm/messages
# for the current date
DATE=`date | cut -f2,3 -d" "`; export DATE
grep "$DATE" /var/adm/messages | grep "$1"
The script works by reading a date stamp from the system, and extracting today's
month and day using cut (columns 2 and 3 of the output of the date command). This is
exported to an environment variable called $DATE. The next command then
searches the /var/adm/messages file for entries containing the day and month
contained in $DATE, and then filters the entries further for the string supplied on
the command-line. To use the script, you need to supply a string to search for on the
command-line. For example, to search for all entries containing mail.alert, the
following command could be used:
$ filter_syslog.sh mail.alert
Jun 10 08:52:56 ivana sendmail[213]: [ID 702911 mail.alert] unable to qualify my own domain name (ivana)
using short name
Here, we can see only one mail.alert entry relating to a name service problem. If this
script was run once every day, the administrator would automatically gain a list of
issues to be resolved.
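The two-stage pipeline used by the script can be exercised against a small inlined sample; the file name and log entries below are hypothetical, constructed only to show how the date filter and the string filter combine:

```shell
# The first grep keeps entries for the chosen date; the second
# filters those entries for the requested string.
cat > /tmp/messages.sample <<'EOF'
Jun 10 08:52:56 ivana sendmail[213]: [ID 702911 mail.alert] unable to qualify my own domain name
Jun 11 09:14:02 ivana su: [ID 366847 auth.notice] 'su root' succeeded
EOF
grep "Jun 10" /tmp/messages.sample | grep "mail.alert"
```

Only the Jun 10 mail.alert entry survives both filters; the Jun 11 entry is excluded by the date stage.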
This would result in the following entry being inserted into the /var/adm/messages
file:
Jun 10 10:16:44 ivana pwatters: [ID 702911 daemon.crit] **** INTRUDER DETECTED on pts/3
Here, we can see that the event has been recorded with the daemon.crit level. By
using the filter_syslog.sh script, an administrator could check to see whether new
entries have been added each day or every hour.
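Entries like the one shown above can be written from the command line with the standard logger utility, whose -p option selects the facility.level pair. A sketch, with the message text mirroring the example:

```shell
# Write a daemon.crit entry via syslog; "|| true" ignores the error
# on systems where no syslog daemon is listening.
logger -p daemon.crit "**** INTRUDER DETECTED on pts/3" 2>/dev/null || true
```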
Procedure
1. Login to your system as root.
2. Use the grep command to create a list of all telnet sessions logged in the system log.
Procedure
1.
2.
3.
4.
Learning Outcomes
Upon successful completion of this course, students will be able to:
State the steps required to create, check, and mount file systems.
Describe the differences between physical disk devices and disk metadevices.
List the steps required to create disk volumes using Solaris Volume Manager.
State the properties of a pseudo file system.
Describe the commands used to operate on the /proc file system.
State the steps required to add virtual memory to the system.
Complete the assigned Readings, following the suggested order outlined in this path.
Read Theme 1: File systems.
Read Theme 2: Volume management.
Read Theme 3: Using Volume Manager.
Read Theme 4: The proc file system.
Read Theme 5: Virtual memory.
Complete Assignment 12.1: Using fsck.
Complete Assignment 12.2: Creating virtual memory.
Readings
You may wish to complete the readings for this module in the order suggested in the Path
to Complete the Module.
Here, we can see a list of the disk blocks where a backup of the super-block is created.
Thus, if the super-block is corrupted, it can be read from another block. The file system
created was 1725.1M in size, occupying 3533040 sectors in 3505 cylinders of 16 tracks.
For reference, the newfs command also displays the equivalent parameters that could be
used to create the file system by using the mkfs command:
# mkfs -F ufs -o N /dev/rdsk/c0t0d0s0 3533040 63 16 8192 1024 32 3 90 4096 t 0 1 8 16
Once a file system has been created, logging should be enabled to ensure that the file
system can be recovered if the system crashes. If logging is not enabled in the /etc/vfstab
file for each volume, then file systems can still be recovered by using the fsck command.
However, since the fsck command is usually run at boot time, this can significantly extend
the amount of time required for booting. Note that fsck should never be used on a mounted
file system.
To mount a file system, the mount command is used. A mount point must be created for a
file system before it is mounted. The following command sequence creates a mount point
/data, and then mounts a UFS file system c0t0d0s5 on /data:
# mkdir /data
# mount /dev/dsk/c0t0d0s5 /data
Only the super-user can mount file systems directly. To check which file systems have
already been mounted, the mount command can be used without any options:
# /sbin/mount
/ on /dev/dsk/c0t0d0s0 read/write/setuid/intr/largefiles/onerror=panic/dev=2200000 on Mon Jun 10 08:51:25 2002
/proc on /proc read/write/setuid/dev=31c0000 on Mon Jun 10 08:51:24 2002
/dev/fd on fd read/write/setuid/dev=3280000 on Mon Jun 10 08:51:26 2002
/etc/mnttab on mnttab read/write/setuid/dev=3380000 on Mon Jun 10 08:51:28 2002
/var/run on swap read/write/setuid/dev=1 on Mon Jun 10 08:51:28 2002
/tmp on swap read/write/setuid/dev=2 on Mon Jun 10 08:51:30 2002
/export/home on /dev/dsk/c0t0d0s7 read/write/setuid/intr/largefiles/onerror=panic/dev=2200007 on Mon Jun 10
08:51:30 2002
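A quick, portable sanity check on the mount table is to confirm that the root file system is present, and then report its capacity with df:

```shell
# The root file system should always appear in the mount table;
# df -k reports its device, capacity and usage in kilobytes.
mount | grep ' / '
df -k /
```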
Note that the state databases are now replicated across two different controllers (c1 and
c0), maximizing redundancy.
Here, the two partitions c1t0d0s5 and c2t0d0s5, running on separate controllers, can be
combined to form the virtual disk d1. To initialize the d1 volume, and mount it on /data,
the following command sequence can be used:
# metainit d1
# newfs /dev/md/rdsk/d1
# mkdir /data
# mount /dev/md/dsk/d1 /data
Alternatively, if you wanted to create a mirrored virtual file system called d2, by writing to
both c2t0d0s5 and c1t0d0s5 concurrently, the following definitions would need to be
entered into md.tab:
d2 -m /dev/md/dsk/d3 /dev/md/dsk/d4
d3 1 1 /dev/dsk/c1t0d0s5
d4 1 1 /dev/dsk/c2t0d0s5
While d2 is the virtual disk device for the mirrored device, each individual disk must also
have a virtual counterpart (d3 maps to /dev/dsk/c1t0d0s5 and d4 maps to
/dev/dsk/c2t0d0s5). To initialize the mirrored file system to operate as /oracle, the
following command sequence would be used:
# metainit d2
# metainit d3
# metainit d4
# newfs d2
# newfs d3
# newfs d4
# mkdir /oracle
# mount /dev/md/dsk/d2 /oracle
There are several commands that can be used to make sense of this data, including:
pflags: prints tracing flags
pcred: displays process credentials
pmap: prints the address space map
pldd: displays a list of libraries being used
psig: prints current process signals
pstack: displays a stack trace
pfiles: lists a set of open file details
pwdx: displays the current working directory
ptree: prints a process tree
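All of these tools work by reading the per-process entries under /proc. The raw entries can also be listed directly; the sketch below inspects the current shell's own entry (the exact files present differ between Solaris and other systems, such as Linux):

```shell
# Each process has a directory /proc/<pid>; $$ is the PID of the
# current shell, so this lists the shell's own /proc entry.
ls /proc/$$
```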
To add the file as virtual memory, the following command can be used:
# swap -a /swap
To report on current available virtual memory, the following command can be used:
# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c0t0d0s1 136,1 16 1049312 1049312
/swap - 16 2032 2032
This output shows the number of free and used blocks for all virtual memory devices.
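The swaplo, blocks and free figures are reported in 512-byte blocks, so the size of the swap partition above can be converted to megabytes with shell arithmetic:

```shell
# 1049312 blocks of 512 bytes each, converted to megabytes.
blocks=1049312
echo "$((blocks * 512 / 1024 / 1024)) MB"   # prints: 512 MB
```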
Procedure
1. Login to your system as root.
2. Use fsck to check at least one file system.
Procedure
1.
2.
3.
Learning Outcomes
Upon successful completion of this course, students will be able to:
Describe how to use the ps command for process monitoring.
State the key properties of processes and threads in a multiprocess, multithreaded
system
Identify the role of interrupt levels and the lockstat program
State the key properties of real-time scheduling and scheduling classes
Describe the role of processor sets
Identify the steps required to monitor CPU activity
Describe how to use the Solaris Resource Manager
Readings
Oracle Solaris Administration: Common Tasks, Chapter 20
(https://docs.oracle.com/cd/E23824_01/html/821-1451/docinfo.html#scrolltoc)
In this example, the user has two processes running (26923 and 26934), both spawned
from terminal 8, and both of which have consumed minimal CPU time. The
applications running are the TENEX C shell (tcsh) and the newmail command; the former
is running in the foreground, while the latter is running in the background.
The ps command has many options. For example, to display a list of all processes running
on a system, the ps -A command can be used as follows:
$ ps -A
PID TTY TIME CMD
0 ? 0:13 sched
1 ? 0:50 init
2 ? 0:03 pageout
3 ? 250:35 fsflush
562 ? 0:00 sac
345 ? 0:01 xntpd
255 ? 0:00 lockd
62 ? 0:00 sysevent
64 ? 0:00 sysevent
374 ? 0:00 dptelog
511 ? 0:00 keyserv
291 ? 0:11 cron
212 ? 0:00 in.ndpd
336 ? 0:17 utmpd
To display a full listing for all processes, the ps -Af command can be used:
$ ps -Af
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Apr 11 ? 0:13 sched
root 1 0 0 Apr 11 ? 0:50 /etc/init
root 2 0 0 Apr 11 ? 0:03 pageout
root 3 0 0 Apr 11 ? 250:35 fsflush
root 562 1 0 Apr 11 ? 0:00 /usr/lib/saf/sac -t 300
root 345 1 0 Apr 11 ? 0:01 /usr/lib/inet/xntpd
root 255 1 0 Apr 11 ? 0:00 /usr/lib/nfs/lockd
root 62 1 0 Apr 11 ? 0:00 /usr/lib/syseventd
root 64 1 0 Apr 11 ? 0:00 /usr/lib/syseventconfd
root 374 1 0 Apr 11 ? 0:00 /opt/ORACLEWhwrdg/dptelog
root 511 1 0 Apr 11 ? 0:00 /usr/sbin/keyserv
root 291 1 0 Apr 11 ? 0:11 /usr/sbin/cron
root 212 1 0 Apr 11 ? 0:00 /usr/lib/inet/in.ndpd
root 336 1 0 Apr 11 ? 0:17 /usr/lib/utmpd
Here, we can see the command names associated with each of the processes being
executed.
In this example, a question mark ? in the TTY column indicates that the process is not bound to any specific terminal.
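Beyond the whole-table listings above, the POSIX -o and -p options select specific columns for specific processes; a minimal sketch that reports the current shell's own PID and command name:

```shell
# Print only the PID and command name of the current shell; the
# trailing "=" suppresses the column headers.
ps -o pid= -o comm= -p $$
```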
With the -c option, the ps command displays the process list in scheduler format, as shown below:
# ps -c
PID CLS PRI TTY TIME CMD
290 TS 40 pts/2 0:00 sh
295 TS 48 pts/2 0:00 bash
299 TS 58 pts/2 0:00 ps
In this example, a priority and priority class value are displayed. The class can be one of
the following:
SYS - the System Class
TS - the Time Sharing class, with a configured user priority range of -60 through 60
IA - the Interactive Class with a configured user priority range of -60 through 60
The long format of the command displays even more process characteristics related to
scheduling:
# ps -clf
F S UID PID PPID CLS PRI ADDR SZ WCHAN STIME TTY TIME CMD
The format here reflects scheduler properties specified by priocntl, which is a command
that prints or sets real-time scheduling parameters for processes. You can retrieve a list of
all classes supported by the system by using the following command:
# priocntl -l
CONFIGURED CLASSES
==================
SYS (System Class)
TS (Time Sharing)
Configured TS User Priority Range: -60 through 60
IA (Interactive)
Configured IA User Priority Range: -60 through 60
The real-time class allows processes to be run with absolute priority on a
system, without regard to the requirements of other processes. This is very useful where a
real-time controller or some other external device needs to be supported with fine
temporal resolution for data collection or control. In this instance, the system acts more
like a single-user system. However, it's more usual for UNIX processes to be time-sharing,
since this is the basis of a multi-user system, using algorithms that ensure a fair
distribution of CPU time amongst a set of processes competing for a scarce resource.
Key columns in the top command include THR (number of threads spawned by a process),
PRI (process priority), NICE (process nice value), SIZE (process size), RES (amount of
application data resident in memory), and STATE (run state or sleep). A summary of
system load data is also provided, including CPU and memory load.
Load averages are also provided as part of the w command:
$ w
3:40pm up 31 day(s), 22:37, 41 users, load average: 1.42, 2.01, 2.15
User tty login@ idle JCPU PCPU what
jones pts/1 7:56am 24 1:26 24 /bin/ispell -a -m -B
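The three figures are 1-, 5- and 15-minute averages of the run queue length. They also appear in the output of the uptime command; on Linux systems the raw values can be read from /proc/loadavg, used below as a fallback:

```shell
# Print the current load averages; fall back to /proc/loadavg if
# the uptime command is not installed.
uptime 2>/dev/null || cat /proc/loadavg
```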
To view the status of all CPUs installed on a system, the psrinfo command can be used:
$ psrinfo
0 on-line since 04/11/02 04:09:58
1 on-line since 04/11/02 04:09:59
2 on-line since 04/11/02 04:09:59
3 on-line since 04/11/02 04:09:59
Procedure
1.
2.
3.
4.
5.
Procedure
1.
2.
3.
4.
5.
[1] Note that this administration guide is now six years out-of-date, and only current to Solaris 10
[2] Note that you may have to install the Volume Manager package manually using the command: pkg install
storage/svm