A system as large and complex as a modern operating system must be engineered carefully if it is to function properly and be modified easily. A common approach is to partition the task into small components rather than have one monolithic system. Each of these modules should be a well-defined portion of the system, with carefully defined inputs, outputs, and functions.
1.8.1 Simple Structure
Many commercial operating systems do not have well-defined structures. Frequently, such systems started as small, simple, and limited systems and then grew beyond their original scope. MS-DOS is an example of such a system. It was originally designed and implemented by a few people who had no idea that it would become so popular. It was written to provide the most functionality in the least space, so it was not divided into modules carefully. In MS-DOS, the interfaces and levels of functionality are not well separated. For instance, application programs are able to access the basic I/O routines to write directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable to errant (or malicious) programs, causing entire system crashes when user programs fail. Of course, MS-DOS was also limited by the hardware of its era. Because the Intel 8088 for which it was written provides no dual mode and no hardware protection, the designers of MS-DOS had no choice but to leave the base hardware accessible.
Another example of limited structuring is the original UNIX operating system. Like MS-DOS, UNIX initially was limited by hardware functionality. It consists of two separate parts: the kernel and the system programs. The kernel is further separated into a series of interfaces and device drivers, which have been added and expanded over the years as UNIX has evolved. Everything below the system-call interface and above the physical hardware is the kernel. The kernel provides the file system, CPU scheduling, memory management, and other operating-system functions through system calls. Taken in sum, that is an enormous amount of functionality to be combined into one level. This monolithic structure was difficult to implement and maintain.
Figure 12. MS-DOS layer structure
1.8.2 Layered Approach
With proper hardware support, operating systems can be broken into pieces that are smaller and more appropriate than those allowed by the original MS-DOS and UNIX systems. The operating system can then retain much greater control over the computer and over the applications that make use of that computer. Implementers have more freedom in changing the inner workings of the system and in creating modular operating systems. Under a top-down approach, the overall functionality and features are determined and are separated into components.
Information hiding is also important, because it leaves programmers free to implement the low-level routines as they see fit, provided that the external interface of the routine stays unchanged and that the routine itself performs the advertised task. A system can be made modular in many ways.
One method is the layered approach, in which the operating system is broken into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest layer (layer N) is the user interface. An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical operating-system layer, say layer M, consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers. The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses functions (operations) and services of only lower-level layers. This approach simplifies debugging and system verification.
Figure 13. Traditional UNIX System Structure
The first layer can be debugged without any concern for the rest of the system because, by definition, it uses only the basic hardware (which is assumed correct) to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged. Thus, the design and implementation of the system are simplified. Each layer is implemented with only those operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what these operations do. Hence, each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.
The major difficulty with the layered approach involves appropriately defining the various layers. Because a layer can use only lower-level layers, careful planning is necessary. For example, the device driver for the backing store (disk space used by virtual-memory algorithms) must be at a lower level than the memory-management routines, because memory management requires the ability to use the backing store. Other requirements may not be so obvious. The backing-store driver would normally be above the CPU scheduler, because the driver may need to wait for I/O, and the CPU can be rescheduled during this time. However, on a large system, the CPU scheduler may have more information about all the active processes than can fit in memory. Therefore, this information may need to be swapped in and out of memory, requiring the backing-store driver routine to be below the CPU scheduler.
Figure 14. A layered Operating System
A final problem with layered implementations is that they tend to be less efficient than other types. For instance, when a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer, which calls the memory-management layer, which in turn calls the CPU-scheduling layer, which is then passed to the hardware. At each layer, the parameters may be modified, data may need to be passed, and so on. Each layer adds overhead to the system call; the net result is a system call that takes longer than does one on a nonlayered system. These limitations have caused a small backlash against layering in recent years. Fewer layers with more functionality are being designed, providing most of the advantages of modularized code while avoiding the difficult problems of layer definition and interaction.
1.8.3 Microkernels
As UNIX expanded, the kernel became large and difficult to manage.
In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system and user-level programs. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, microkernels provide minimal process and memory management, in addition to a communication facility. The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space. Communication is provided by message passing.
For example, if the client program wishes to access a file, it must interact with the file server. The client program and the service never interact directly. Rather, they communicate indirectly by exchanging messages with the microkernel.
One benefit of the microkernel approach is ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services run as user processes rather than kernel processes. If a service fails, the rest of the operating system remains untouched.
Several contemporary operating systems have used the microkernel approach. Tru64 UNIX (formerly Digital UNIX) provides a UNIX interface to the user, but it is implemented with a Mach kernel. The Mach kernel maps UNIX system calls into messages to the appropriate user-level services. The Mac OS X kernel (also known as Darwin) is also based on the Mach microkernel. Another example is QNX, a real-time operating system. The QNX microkernel provides services for message passing and process scheduling. It also handles low-level network communication and hardware interrupts. All other services in QNX are provided by standard processes that run outside the kernel in user mode.
Unfortunately, microkernels can suffer from performance decreases due to increased system-function overhead. Consider the history of Windows NT. The first release had a layered microkernel organization. However, this version delivered low performance compared with that of Windows 95. Windows NT 4.0 partially redressed the performance problem by moving layers from user space to kernel space and integrating them more closely. By the time Windows XP was designed, its architecture was more monolithic than microkernel.
1.8.4 Modules
Perhaps the best current methodology for operating-system design involves using object-oriented programming techniques to create a modular kernel. Here, the kernel has a set of core components and links in additional services either during boot time or during run time. Such a strategy uses dynamically loadable modules and is common in modern implementations of UNIX, such as Solaris, Linux, and Mac OS X. For example, the Solaris operating system structure, shown in Figure 15, is organized around a core kernel with seven types of loadable kernel modules:
- Scheduling classes
- File systems
- Loadable system calls
- Executable formats
- STREAMS modules
- Device and bus drivers
Figure 15. Solaris loadable modules
Such a design allows the kernel to provide core services yet also allows certain features to be implemented dynamically. For example, device and bus drivers for specific hardware can be added to the kernel, and support for different file systems can be added as loadable modules. The overall result resembles a layered system in that each kernel section has defined, protected interfaces; but it is more flexible than a layered system in that any module can call any other module. Furthermore, the approach is like the microkernel approach in that the primary module has only core functions and knowledge of how to load and communicate with other modules; but it is more efficient, because modules do not need to invoke message passing in order to communicate.
The Apple Mac OS X operating system uses a hybrid structure. It is a layered system in which one layer consists of the Mach microkernel. The structure of Mac OS X appears in Figure 16. The top layers include application environments and a set of services providing a graphical interface to applications. Below these layers is the kernel environment, which consists primarily of the Mach microkernel and the BSD kernel. Mach provides memory management; support for remote procedure calls (RPCs) and interprocess communication (IPC) facilities, including message passing; and thread scheduling. The BSD component provides a BSD command-line interface, support for networking and file systems, and an implementation of POSIX APIs, including Pthreads.
Figure 16. The Mac OS X Structure.
In addition to Mach and BSD, the kernel environment provides an I/O kit for development of device drivers and dynamically loadable modules (which Mac OS X refers to as kernel extensions). As shown in the figure, applications and common services can make use of either the Mach or BSD facilities directly.
References:
- Modern Operating Systems by Andrew S. Tanenbaum, Second Edition
- Operating System Concepts by Silberschatz, Galvin, and Gagne, Eighth Edition