
3.0 Input Output Process

In computing, input/output, or I/O, refers to the communication between an information
processing system (such as a computer) and the outside world, possibly a human or
another information processing system. Inputs are the signals or data received by the
system, and outputs are the signals or data sent from it. The term can also be used as part
of an action; to "perform I/O" is to perform an input or output operation. I/O devices are
used by a person (or other system) to communicate with a computer. For instance, a
keyboard or a mouse may be an input device for a computer, while monitors and printers
are considered output devices. Devices for communication between
computers, such as modems and network cards, typically serve for both input and output.

Note that the designation of a device as either input or output depends on the perspective.
Mice and keyboards take as input physical movement that the human user outputs and
convert it into signals that a computer can understand. The output from these devices is
input for the computer. Similarly, printers and monitors take as input signals that a
computer outputs. They then convert these signals into representations that human users
can see or read. (For a human user the process of reading or seeing these representations
is receiving input.)

In computer architecture, the combination of the CPU and main memory (i.e. memory
that the CPU can read and write to directly, with individual instructions) is considered the
brain of a computer, and from that point of view any transfer of information from or to
that combination, for example to or from a disk drive, is considered I/O. The CPU and its
supporting circuitry provide memory-mapped I/O that is used in low-level computer
programming in the implementation of device drivers. An I/O algorithm is one designed
to exploit locality and perform efficiently when data reside on secondary storage, such as
a disk drive.

3.1 Interface of an input output process

An I/O interface is required whenever an I/O device is driven by the processor. The
interface must have the logic necessary to interpret the device address generated by the
processor. Handshaking should be implemented by the interface using appropriate
commands such as BUSY, READY, and WAIT, so that the processor can communicate with the I/O
device through the interface. If different data formats are being exchanged, the interface
must be able to convert serial data to parallel form and vice versa. There must also be
provision for generating interrupts and the corresponding type numbers for further
processing by the processor, if required.
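The BUSY/READY/WAIT handshake above can be sketched as a small simulation. This is an illustrative model only, not any real hardware interface: the `Interface` class, its method names, and the status strings are all invented for the example.

```python
# Minimal sketch of BUSY/READY handshaking between a processor and an
# I/O interface. All names here are illustrative, not from a real device.

class Interface:
    def __init__(self):
        self.status = "READY"   # interface starts ready to accept data
        self.latch = None       # register that holds data for the device

    def processor_write(self, data):
        """Processor side: hand data to the interface, or be told to WAIT."""
        if self.status != "READY":
            return "WAIT"       # device has not consumed the previous byte
        self.latch = data
        self.status = "BUSY"    # interface raises BUSY while device works
        return "OK"

    def device_consume(self):
        """Device side: take the latched data and drop BUSY."""
        data, self.latch = self.latch, None
        self.status = "READY"   # handshake complete
        return data

iface = Interface()
first = iface.processor_write(0x41)    # accepted: interface was READY
second = iface.processor_write(0x42)   # rejected: device has not consumed yet
consumed = iface.device_consume()      # device takes 0x41, returns to READY
third = iface.processor_write(0x42)    # accepted again
```

The point of the sketch is the protocol, not the data: the processor may only transfer when the interface reports READY, and must WAIT (retry) otherwise.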

A computer that uses memory-mapped I/O accesses hardware by reading and writing to
specific memory locations, using the same assembly language instructions that the
computer would normally use to access memory.

3.2 Memory-mapped I/O and Port-mapped I/O

Memory-mapped I/O (MMIO) and port I/O (also called port-mapped I/O or PMIO)
are two complementary methods of performing input/output between the CPU and
peripheral devices in a computer. A third method, not discussed here, is to use
dedicated I/O processors, commonly known as channels on mainframe computers, which
execute their own instructions.

Memory-mapped I/O (not to be confused with memory-mapped file I/O) uses the same
address bus to address both memory and I/O devices, and the CPU instructions used to
access memory are also used for accessing devices. To accommodate the I/O
devices, areas of the CPU's addressable space must be reserved for I/O. The reservation
might be temporary (the Commodore 64 could bank-switch between its I/O devices and
regular memory) or permanent. Each I/O device monitors the CPU's address bus and
responds to any CPU access of its assigned address space, connecting the data bus
to the desired device's hardware register.
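A minimal sketch of this idea: one flat address space in which a device claims a small region, so the same load/store operation reaches RAM or a device register depending only on the address. The region boundaries and the single device register are invented for illustration.

```python
# Sketch of memory-mapped I/O: the address alone decides whether an
# access goes to RAM or to a device register. Addresses are illustrative.

RAM_SIZE = 0x8000
DEVICE_BASE = 0xFF00          # device-assigned region near the top of memory

ram = bytearray(RAM_SIZE)
device_reg = {0xFF00: 0}      # a single hardware register, modeled as a dict

def store(addr, value):
    """One 'store instruction' serves both memory and devices."""
    if addr >= DEVICE_BASE:
        device_reg[addr] = value   # device responds to its assigned region
    else:
        ram[addr] = value          # ordinary memory access

def load(addr):
    """One 'load instruction' likewise serves both."""
    if addr >= DEVICE_BASE:
        return device_reg[addr]
    return ram[addr]

store(0x0100, 0x7F)    # plain RAM write
store(0xFF00, 0x01)    # the very same operation hits the device register
```

In real hardware the `if addr >= DEVICE_BASE` test is performed by the address decoding circuitry, not by the CPU.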

Port-mapped I/O uses a special class of CPU instructions specifically for performing I/O.
This is generally found on Intel microprocessors, specifically the IN and OUT
instructions, which can read and write a single byte to an I/O device. I/O devices have a
separate address space from general memory, accomplished either by an extra "I/O" pin
on the CPU's physical interface or by an entire bus dedicated to I/O.
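By contrast with memory-mapped I/O, the port space is disjoint from memory: the same numeric address can exist in both spaces without conflict. A sketch, with the IN/OUT instructions modeled as ordinary functions and the port numbers invented for illustration:

```python
# Sketch of port-mapped I/O: a separate port address space reachable only
# through dedicated IN/OUT operations, modeled here as functions.

memory = bytearray(65536)   # the full 64 KiB address space usable as memory
ports = {}                  # disjoint I/O address space

def out(port, value):
    """Analogue of the x86 OUT instruction: write one byte to a port."""
    ports[port] = value & 0xFF

def inp(port):
    """Analogue of the x86 IN instruction: read one byte from a port."""
    return ports.get(port, 0)

out(0x60, 0x1C)             # write to I/O port 0x60
memory[0x60] = 0xAA         # memory address 0x60 is a different location
```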

A device's direct memory access (DMA) is not affected by these CPU-to-device
communication methods; in particular, it is not affected by memory mapping. This is
because, by definition, DMA is a memory-to-device communication method that bypasses
the CPU.

Hardware interrupts are yet another communication method between the CPU and peripheral
devices. However, they are always treated separately, for a number of reasons. They are device-
initiated, as opposed to the methods mentioned above, which are CPU-initiated. They are also
unidirectional, as information flows only from device to CPU. Lastly, each interrupt line
carries only one bit of information with a fixed meaning, namely "an event that requires
attention has occurred in a device on this interrupt line".

3.2.1 Relative merits of the two I/O methods

The main advantage of using port-mapped I/O is on CPUs with limited addressing
capability. Because port-mapped I/O separates I/O access from memory access, the full
address space can be used for memory. It is also obvious to a person reading an assembly
language listing (or even, in rare instances, analyzing machine language) when
I/O is being performed, owing to the special instructions that can only be used for that
purpose.

I/O operations can slow memory access if the address and data buses are shared. This
is because the peripheral device is usually much slower than main memory. In some
architectures, port-mapped I/O operates via a dedicated I/O bus, alleviating the problem.

There are two major advantages of using memory-mapped I/O. The first is that, by
discarding the extra complexity that port I/O brings, a CPU requires less internal logic
and is thus cheaper and faster, is easier to build, consumes less power, and can be physically
smaller; this follows the basic tenets of reduced instruction set computing, and is also
advantageous in embedded systems. The second is that, because regular memory
instructions are used to address devices, all of the CPU's addressing modes are available
for I/O as well as memory, and instructions that perform an ALU operation
directly on a memory operand (loading an operand from a memory location, storing the
result to a memory location, or both) can be used with I/O device registers as well. In
contrast, port-mapped I/O instructions are often very limited, typically providing only
plain load and store operations between CPU registers and I/O ports, so that, for example,
adding a constant to a port-mapped device register requires three instructions: read
the port into a CPU register, add the constant to the CPU register, and write the result back
to the port.
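The read-modify-write contrast can be sketched with the toy models from above. The port and register numbers are invented; the point is the number of steps each style needs.

```python
# Sketch contrasting the two styles of adding a constant to a device
# register. Both "registers" are simplified stand-ins with initial value 5.

ports = {0x60: 5}     # port-mapped device register, reachable only via IN/OUT
mmio = {0xFF00: 5}    # memory-mapped device register, an ordinary address

# Port-mapped: three steps, mirroring IN / ADD / OUT.
reg = ports[0x60]     # 1. read the port into a CPU register
reg += 3              # 2. add the constant in the CPU register
ports[0x60] = reg     # 3. write the result back to the port

# Memory-mapped: a single memory-operand instruction (e.g. x86 `add [addr], 3`)
# performs the whole read-modify-write.
mmio[0xFF00] += 3
```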

As 16-bit processors have become obsolete and been replaced with 32-bit and 64-bit
processors in general use, reserving ranges of memory address space for I/O is less of a
problem, as the address space of the processor is usually much larger than the space
required for all memory and I/O devices in a system. It has therefore become more
frequently practical to take advantage of the benefits of memory-mapped I/O. However,
even with address space no longer a major concern, neither I/O mapping method is
universally superior to the other, and there will be cases where using port-mapped I/O is
still preferable.

3.2.2 Example

Consider a simple system built around an 8-bit microprocessor. Such a CPU might
provide 16 address lines, allowing it to address up to 64 kibibytes (KiB) of memory.
On such a system, perhaps the first 32 KiB of address space would be allotted to random
access memory (RAM), another 16 KiB to read-only memory (ROM), and the remainder to a
variety of other devices such as timers, counters, video display chips, sound-generating
devices, and so forth. The hardware of the system is arranged so that devices on the
address bus respond only to the particular addresses intended for them; all
other addresses are ignored. This is the job of the address decoding circuitry, and it is this
that establishes the memory map of the system.
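The memory map just described can be sketched as a decoder function. The region boundaries (32 KiB RAM, 16 KiB ROM, 16 KiB of devices) come from the text; the decoder itself is illustrative.

```python
# Sketch of address decoding for the 16-address-line example above:
# 32 KiB RAM, 16 KiB ROM, and the remaining 16 KiB for devices.

def decode(addr):
    """Return which region of the 64 KiB memory map an address falls in."""
    if not 0 <= addr <= 0xFFFF:
        raise ValueError("address does not fit on a 16-bit address bus")
    if addr < 0x8000:      # first 32 KiB
        return "RAM"
    if addr < 0xC000:      # next 16 KiB
        return "ROM"
    return "DEVICE"        # remainder: timers, counters, video, sound, ...
```

In hardware this dispatch is done with comparators on the upper address lines; only the selected device drives the data bus.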

3.3 IO scheduling

Input/output scheduling, or I/O scheduling, is the method by which computer
operating systems decide the order in which block I/O operations will be submitted
to storage volumes. I/O scheduling is sometimes called 'disk scheduling'.

3.3.1 Purpose

I/O schedulers can serve many purposes depending on their design goals; some
common goals are:

• To minimize time wasted by hard disk seeks.
• To prioritize a certain process's I/O requests.
• To give a share of the disk bandwidth to each running process.
• To guarantee that certain requests will be issued before a particular deadline.

3.3.2 Implementation

I/O scheduling usually has to work with hard disks, which share the property that
access time is long for requests far from the current position of the disk
head (moving the head is called a seek). To minimize the effect this has on system
performance, most I/O schedulers implement a variant of the elevator algorithm, which
reorders the randomly ordered incoming requests into the order in which they will be
found on the disk.
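The reordering step can be sketched in a few lines: requests at or ahead of the head are served in ascending order on the upward sweep, then the ones behind it on the way back (this one-sweep form is often called LOOK). The track numbers in the usage line are a made-up example queue.

```python
# Sketch of the elevator (SCAN/LOOK) reordering step: sort incoming
# requests into the order the head will pass over them.

def elevator_order(requests, head):
    """Return requests in sweep-up-then-down service order."""
    ahead = sorted(r for r in requests if r >= head)              # upward sweep
    behind = sorted((r for r in requests if r < head), reverse=True)  # return sweep
    return ahead + behind

# Example: head at track 53, a randomly ordered request queue.
order = elevator_order([98, 183, 37, 122, 14, 124, 65, 67], head=53)
```

Each sweep visits tracks monotonically, so the head never doubles back mid-sweep, which is what bounds the seek time.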

3.3.3 Common disk scheduling disciplines

• Random Scheduling (RSS)
• First In, First Out (FIFO), also known as First Come, First Served (FCFS)
• Last In, First Out (LIFO)
• Shortest seek first, also known as Shortest Seek/Service Time First (SSTF)
• Elevator algorithm, also known as SCAN (including its variants C-SCAN,
LOOK, and C-LOOK)
• N-Step-SCAN, a SCAN of N records at a time
• FSCAN, an N-Step-SCAN where N equals the queue size at the start of the SCAN cycle
• Completely Fair Queuing (Linux)
• Anticipatory scheduling
• Noop scheduler
• Deadline scheduler
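To see why the choice of discipline matters, two of the disciplines above can be compared on total head movement for the same queue. The track numbers are a made-up example; the functions are simplified sketches that ignore rotational latency and request arrival times.

```python
# Sketch comparing FIFO and shortest-seek-first (SSTF) on total head
# movement (in tracks) for one request queue.

def fifo_seek(requests, head):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_seek(requests, head):
    """Total head movement when the nearest pending request is served next."""
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))  # closest track wins
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
fifo_total = fifo_seek(queue, 53)
sstf_total = sstf_seek(queue, 53)
```

SSTF covers far fewer tracks here, but it can starve requests at the edges of the disk, which is one motivation for the SCAN family and the deadline scheduler.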
