UNIX SYSTEM PROGRAMMING

Unix (officially trademarked as UNIX) is a multitasking, multi-user computer operating system
originally developed in 1969 by a group of AT&T employees at Bell Labs, including Ken Thompson,
Dennis Ritchie, Brian Kernighan, Douglas McIlroy, Michael Lesk and Joe Ossanna. The Unix
operating system was first developed in assembly
language, but by 1973 had been almost entirely recoded in C, greatly facilitating its further development
and porting to other hardware. Today's Unix system evolution is split into various branches, developed
over time by AT&T as well as various commercial vendors, universities (such as University of California,
Berkeley's BSD), and non-profit organizations.

The Open Group, an industry standards consortium, owns the UNIX trademark. Only systems fully
compliant with and certified according to the Single UNIX Specification are qualified to use the trademark;
others might be called Unix system-like or Unix-like, although the Open Group disapproves of this term.
However, the term Unix is often used informally to denote any operating system that closely resembles
the trademarked system.

During the late 1970s and early 1980s, the influence of Unix in academic circles led to large-scale
adoption of Unix (particularly of the BSD variant, originating from the University of California, Berkeley)
by commercial startups such as Sequent, and to commercial variants such as Solaris, HP-UX, and AIX,
as well as Darwin, which forms the core set of components upon which Apple's OS X, Apple TV, and
iOS are based. Today, in addition to certified Unix systems such as those already mentioned, Unix-
like operating systems such as MINIX, Linux, and BSD descendants (FreeBSD, NetBSD, OpenBSD,
and DragonFly BSD) are commonly encountered. The term traditional Unix may be used to describe an
operating system that has the characteristics of either Version 7 Unix or UNIX System V.

Under Unix, the operating system consists of many utilities along with the master control program,
the kernel. The kernel provides services to start and stop programs, handles the file system and other
common "low level" tasks that most programs share, and schedules access to avoid conflicts when
programs try to access the same resource or device simultaneously. To mediate such access, the kernel
has special rights, reflected in the division between user-space and kernel-space.

The microkernel concept was introduced in an effort to reverse the trend towards larger kernels and
return to a system in which most tasks were completed by smaller utilities. In an era when a standard
computer consisted of a hard disk for storage and a data terminal for input and output (I/O), the Unix file
model worked quite well, as most I/O was linear. However, modern systems include networking and other
new devices. As graphical user interfaces developed, the file model proved inadequate to the task of
handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O was
added and the set of inter-process communication mechanisms was augmented with Unix domain sockets,
shared memory, message queues, and semaphores. In microkernel designs, functions such as network
protocols were moved out of the kernel.

In 2005, Sun Microsystems released the bulk of its Solaris system code (based on UNIX System
V Release 4) into an open source project called OpenSolaris. New Sun OS technologies, notably
the ZFS file system, were first released as open source code via the OpenSolaris project. Soon
afterwards, OpenSolaris spawned several non-Sun distributions. In 2010, after Oracle acquired Sun,
OpenSolaris was officially discontinued, but the development of derivatives continued.

2. Components of Unix
The Unix system is composed of several components that are normally packaged together. By including –
in addition to the kernel of an operating system – the development environment, libraries, documents, and
the portable, modifiable source-code for all of these components, Unix was a self-contained software
system. This was one of the key reasons it emerged as an important teaching and learning tool and has
had such a broad influence.

The inclusion of these components did not make the system large – the original V7 UNIX distribution,
consisting of copies of all of the compiled binaries plus all of the source code and documentation,
occupied less than 10MB, and arrived on a single 9-track magnetic tape. The printed documentation,
typeset from the on-line sources, was contained in two volumes.

The names and filesystem locations of the Unix components have changed substantially across the
history of the system. Nonetheless, the V7 implementation is considered by many to have the canonical
early structure:

       Kernel – source code in /usr/sys, composed of several sub-components:
               conf – configuration and machine-dependent parts, including boot code
               dev – device drivers for control of hardware (and some pseudo-hardware)
               sys – operating system "kernel", handling memory management, process scheduling,
        system calls, etc.
               h – header files, defining key structures within the system and important system-specific invariables
       Development Environment – Early versions of Unix contained a development environment
    sufficient to recreate the entire system from source code:
               cc – C language compiler (first appeared in V3 Unix)
               as – machine-language assembler for the machine
               ld – linker, for combining object files
               lib – object-code libraries (installed in /lib or /usr/lib). libc, the system library with C run-
        time support, was the primary library, but there have always been additional libraries for such
        things as mathematical functions (libm) or database access. V7 Unix introduced the first version
        of the modern "Standard I/O" library stdio as part of the system library. Later implementations
        increased the number of libraries significantly.
               make – build manager (introduced in PWB/UNIX), for effectively automating the build process
               include – header files for software development, defining standard interfaces and system invariants
               Other languages – V7 Unix contained a Fortran-77 compiler, a programmable arbitrary-
        precision calculator (bc, dc), and the awk scripting language, and later versions and
        implementations contain many other language compilers and toolsets. Early BSD releases
        included Pascal tools, and many modern Unix systems also include the GNU Compiler
        Collection as well as or instead of a proprietary compiler system.
               Other tools – including an object-code archive manager (ar), symbol-table lister (nm),
        compiler-development tools (e.g. lex & yacc), and debugging tools.
       Commands – Unix makes little distinction between commands (user-level programs) for system
    operation and maintenance (e.g. cron), commands of general utility (e.g. grep), and more general-
    purpose applications such as the text formatting and typesetting package. Nonetheless, some major
    categories are:
               sh – The "shell" programmable command line interpreter, the primary user interface on
        Unix before window systems appeared, and even afterward (within a "command window").
               Utilities – the core tool kit of the Unix command set, including cp, ls, grep, find and many
        others. Subcategories include:

                     System utilities – administrative tools such as mkfs, fsck, and many others.

                     User utilities – environment management tools such as passwd, kill, and others.

        Document formatting – Unix systems were used from the outset for document preparation and
        typesetting systems, and included many related programs such as nroff, troff, tbl, eqn, refer,
        and pic. Some modern Unix systems also include packages such as TeX and Ghostscript.

               Graphics – The plot subsystem provided facilities for producing simple vector plots in a
        device-independent format, with device-specific interpreters to display such files. Modern Unix
        systems also generally include X11 as a standard windowing system and GUI, and many
        support OpenGL.
               Communications – Early Unix systems contained no inter-system communication, but did
        include the inter-user communication programs mail and write. V7 introduced the early inter-
        system communication system UUCP, and systems beginning with BSD release 4.1c
        included TCP/IP utilities.
      Documentation – Unix was the first operating system to include all of its documentation online in
    machine-readable form. The documentation included:
         man – manual pages for each command, library component, system call, header file, etc.

               doc – longer documents detailing major subsystems, such as the C language and troff.
3. Impact of Unix
The Unix system had a significant impact on other operating systems. It owed its success to:

       Direct interaction.
       Moving away from the total control of businesses like IBM and DEC.
       AT&T giving the software away for free.
       Running on cheap hardware.
       Being easy to adopt and move to different machines.
It was written in a high level language rather than assembly language (which had been thought necessary
for systems implementation on early computers). Although this followed the lead
of Multics and Burroughs, it was Unix that popularized the idea.

Unix had a drastically simplified file model compared to many contemporary operating systems: treating
all kinds of files as simple byte arrays. The file system hierarchy contained machine services and devices
(such as printers, terminals, or disk drives), providing a uniform interface, but at the expense of
occasionally requiring additional mechanisms such as ioctl and mode flags to access features of the
hardware that did not fit the simple "stream of bytes" model. The Plan 9 operating system pushed this
model even further and eliminated the need for additional mechanisms.

Unix also popularized the hierarchical file system with arbitrarily nested subdirectories, originally
introduced by Multics. Other common operating systems of the era had ways to divide a storage device
into multiple directories or sections, but they had a fixed number of levels, often only one level. Several
major proprietary operating systems eventually added recursive subdirectory capabilities also patterned
after Multics. DEC's RSX-11M's "group, user" hierarchy evolved into VMS directories, CP/M's volumes
evolved into MS-DOS 2.0+ subdirectories, and HP's MPE group.account hierarchy and
IBM's SSP and OS/400 library systems were folded into broader POSIX file systems.

Making the command interpreter an ordinary user-level program, with additional commands provided as
separate programs, was another Multics innovation popularized by Unix. The Unix shell used the same
language for interactive commands as for scripting (shell scripts – there was no separate job control
language like IBM's JCL). Since the shell and OS commands were "just another program", the user could
choose (or even write) their own shell. New commands could be added without changing the shell itself.
Unix's innovative command-line syntax for creating modular chains of producer-consumer processes
(pipelines) made a powerful programming paradigm (coroutines) widely available. Many later command-
line interpreters have been inspired by the Unix shell.

A fundamental simplifying assumption of Unix was its focus on newline-delimited ASCII text for nearly all
file formats. There were no "binary" editors in the original version of Unix – the entire system was
configured using textual shell command scripts. The common denominator in the I/O system was the byte
– unlike "record-based" file systems. The focus on text for representing nearly everything made Unix
pipes especially useful, and encouraged the development of simple, general tools that could be easily
combined to perform more complicated ad hoc tasks. The focus on text and bytes made the system far
more scalable and portable than other systems. Over time, text-based applications have also proven
popular in application areas, such as printing languages (PostScript, ODF), and at the application layer of
the Internet protocols, e.g., FTP, SMTP, HTTP, SOAP, and SIP.

Unix popularized a syntax for regular expressions that found widespread use. The Unix programming
interface became the basis for a widely implemented operating system interface standard (POSIX).

The C programming language soon spread beyond Unix, and is now ubiquitous in systems and
applications programming.

Early Unix developers were important in bringing the concepts of modularity and reusability into software
engineering practice, spawning a "software tools" movement.

Unix provided the TCP/IP networking protocol on relatively inexpensive computers, which contributed to
the Internet explosion of worldwide real-time connectivity, and which formed the basis for
implementations on many other platforms. This also exposed numerous security holes in the networking
implementations.

The Unix policy of extensive on-line documentation and (for many years) ready access to all system
source code raised programmer expectations, and contributed to the 1983 launch of the free software
movement.

Over time, the leading developers of Unix (and programs that ran on it) established a set of cultural norms
for developing software, norms which became as important and influential as the technology of Unix itself;
this has been termed the Unix philosophy.

1. Unix operating systems
In 1983, Richard Stallman announced the GNU project, an ambitious effort to create a free software Unix-
like system; "free" in that everyone who received a copy would be free to use, study, modify, and
redistribute it. The GNU project's own kernel development project, GNU Hurd, had not produced a
working kernel, but in 1991 Linus Torvalds released the Linux kernel as free software under the GNU
General Public License. In addition to their use in the GNU/Linux operating system, many GNU packages
– such as the GNU Compiler Collection (and the rest of the GNU toolchain), the GNU C library and
the GNU core utilities – have gone on to play central roles in other free Unix systems as well.

Linux distributions, consisting of the Linux kernel and large collections of compatible software, have
become popular both with individual users and in business. Popular distributions include Red Hat
Enterprise Linux, Fedora, SUSE Linux Enterprise, openSUSE, Debian GNU/Linux, Ubuntu, Linux
Mint, Mandriva Linux, Slackware Linux and Gentoo.

A free derivative of BSD Unix, 386BSD, was also released in 1992 and led to
the NetBSD and FreeBSD projects. With the 1994 settlement of a lawsuit that UNIX Systems
Laboratories brought against the University of California and Berkeley Software Design Inc. (USL v.
BSDi), it was clarified that Berkeley had the right to distribute BSD Unix – for free, if it so desired. Since
then, BSD Unix has been developed in several different directions, including OpenBSD and DragonFly BSD.

Linux and BSD are now rapidly occupying much of the market traditionally occupied by proprietary Unix
operating systems, as well as expanding into new markets such as the consumer desktop and mobile and
embedded devices. Due to the modularity of the Unix design, sharing bits and pieces is relatively
common; consequently, most or all Unix and Unix-like systems include at least some BSD code, and
modern systems also usually include some GNU utilities in their distributions.

OpenSolaris is a relatively recent addition to the list of operating systems based on free software licenses
marked as such by the FSF and OSI. It includes a number of derivatives that combine a CDDL-licensed
kernel and system tools with a GNU userland, and it is currently the only open source System V derivative available.

Main article: Year 2038 problem

Unix stores system time values as the number of seconds since midnight, 1 January 1970 (the "Unix
epoch") in variables of type time_t, historically defined as "signed long". On 19 January 2038, on 32-bit
Unix systems, the current time will roll over from a zero followed by 31 ones (0x7FFFFFFF) to a one
followed by 31 zeros (0x80000000), which will reset time to the year 1901 or 1970, depending on the
implementation, because the increment toggles the sign bit.

Since times before 1970 are rarely represented in Unix time, one possible solution that is compatible with
existing binary formats would be to redefine time_t as an "unsigned 32-bit integer". However, such
a kludge merely postpones the problem to 7 February 2106, and could introduce bugs in software that
computes time differences.

Some Unix versions have already addressed this. For example, in Solaris, and in Linux in 64-bit mode,
time_t is 64 bits long, meaning that the OS itself and 64-bit applications will correctly handle dates for
some 292 billion years. Existing 32-bit applications using a 32-bit time_t continue to work on 64-bit
Solaris systems but are still prone to the 2038 problem. Some vendors have introduced an alternative
64-bit type and a corresponding API, without addressing uses of the standard time_t. The NetBSD
Project decided instead to bump time_t to 64 bits in its 6th major release for both 32-bit and 64-bit
architectures, supporting 32-bit time_t in applications compiled for a former NetBSD release via its
binary compatibility layer.

In May 1975, RFC 681 documented in detail why Unix was DARPA's operating system of choice for use
as an ARPANET mini-host, and the evaluation process was documented as well. Unix required a license
that was very expensive: $20,000 (US) for non-university users, though only $150 for an educational
license. It was noted that for an ARPA network-wide license Bell "were open to suggestions in that area".

Specific features found beneficial were:

       Local processing facilities.
       Compilers.
       Editor.
       Document preparation system.
       Efficient file system and access control.
       Mountable and de-mountable volumes.
       Unified treatment of peripherals as special files.
       The network control program (NCP) was integrated within the Unix file system.
       Network connections treated as special files which can be accessed through standard Unix I/O calls.
       The system closes all files on program exit.
       "desirable to minimize the amount of code added to the basic Unix kernel".

See also: List of Unix systems

In October 1993, Novell, the company that owned the rights to the Unix System V source at the time,
transferred the trademarks of Unix to the X/Open Company (now The Open Group), and in 1995 sold
the related business operations to the Santa Cruz Operation. Whether Novell also sold the copyrights to the
actual software was the subject of a 2006 federal lawsuit, SCO v. Novell, in which Unix vendor SCO
Group Inc. accused Novell of slander of title; Novell won the case. The decision was appealed, but on
30 August 2011 the United States Court of Appeals for the Tenth Circuit affirmed the trial decisions,
closing the case.

The present owner of the trademark UNIX is The Open Group, an industry standards consortium. Only
systems fully compliant with and certified to the Single UNIX Specification qualify as "UNIX".

Sometimes a representation like Un*x, *NIX, or *N?X is used to indicate all operating systems similar to
Unix. This comes from the use of the asterisk (*) and the question mark characters as wildcard indicators
in many utilities. This notation is also used to describe other Unix-like systems, e.g., Linux, BSD, etc., that
have not met the requirements for UNIX branding from the Open Group.

The Open Group requests that UNIX always be used as an adjective followed by a generic term such
as system to help avoid the creation of a genericized trademark.

"Unix" was the original formatting, but the usage of "UNIX" remains widespread because, according
to Dennis Ritchie, when presenting the original UNIX paper to the third Operating Systems Symposium of
the American Association for Computing Machinery, “we had a new typesetter and troff had just been
invented and we were intoxicated by being able to produce small caps.” Many of the operating
system's predecessors and contemporaries used all-uppercase lettering, so many people wrote the name
in upper case due to force of habit.

Several plural forms of Unix are used casually to refer to multiple brands of Unix and Unix-like systems.
Most common is the conventional Unixes, but Unices, treating Unix as a Latin noun of the third
declension, is also popular. The pseudo-Anglo-Saxon plural form Unixen is not common, although
occasionally seen. Trademark names can be registered by different entities in different countries and
trademark laws in some countries allow the same trademark name to be controlled by two different
entities if each entity uses the trademark in easily distinguishable categories. The result is that Unix has
been used as a brand name for various products including book shelves, ink pens, bottled glue, diapers,
hair driers and food containers.

2. Unix philosophy
The Unix philosophy is a set of cultural norms and philosophical approaches to
developing software based on the experience of leading developers of the Unix operating system.
McIlroy: A Quarter Century of Unix
Doug McIlroy, then head of the Bell Labs CSRC and the inventor of Unix pipes, summarised the Unix
philosophy as follows:

This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work
together. Write programs to handle text streams, because that is a universal interface.

This is often abridged to 'write programs that do one thing and do it well'.

Eric Raymond
Eric S. Raymond, in his book The Art of Unix Programming, summarizes the Unix philosophy as the
widely-used KISS Principle of "Keep it Simple, Stupid." He also provides a series of design rules:

       Rule of Modularity: Write simple parts connected by clean interfaces.
       Rule of Clarity: Clarity is better than cleverness.
       Rule of Composition: Design programs to be connected to other programs.
       Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
       Rule of Simplicity: Design for simplicity; add complexity only where you must.
        Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else
    will do.
       Rule of Transparency: Design for visibility to make inspection and debugging easier.
       Rule of Robustness: Robustness is the child of transparency and simplicity.
        Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.
       Rule of Least Surprise: In interface design, always do the least surprising thing.
       Rule of Silence: When a program has nothing surprising to say, it should say nothing.
       Rule of Repair: When you must fail, fail noisily and as soon as possible.
       Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
       Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
       Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
       Rule of Diversity: Distrust all claims for "one true way".
       Rule of Extensibility: Design for the future, because it will be here sooner than you think.

Mike Gancarz: The UNIX Philosophy
In 1994 Mike Gancarz (a member of the team that designed the X Window System), drew on his own
experience with Unix, as well as discussions with fellow programmers and people in other fields who
depended on Unix, to produce The UNIX Philosophy which sums it up in 9 paramount precepts:

    1. Small is beautiful.
    2. Make each program do one thing well.
    3. Build a prototype as soon as possible.
    4. Choose portability over efficiency.
    5. Store data in flat text files.
    6. Use software leverage to your advantage.
    7. Use shell scripts to increase leverage and portability.
    8. Avoid captive user interfaces.
    9. Make every program a filter.
Worse is better
Main article: Worse is better

Richard P. Gabriel suggests that a key advantage of Unix was that it embodied a design philosophy he
termed "worse is better", in which simplicity of both the interface and the implementation are more
important than any other attributes of the system—including correctness, consistency, and completeness.
Gabriel argues that this design style has key evolutionary advantages, though he questions the quality of
some results.

For example, in the early days Unix used a monolithic kernel (which means that user processes carried
out kernel system calls on the user stack). If a signal was delivered to a process while it was blocked
on long-term I/O in the kernel, then what should be done? Should the signal be delayed, possibly for a
long time (maybe indefinitely), while the I/O completed? The signal handler could not be executed when
the process was in kernel mode, with sensitive kernel data on the stack. Should the kernel back out the
system call and store it for replay and restart later, assuming that the signal handler completes
successfully?

In these cases Ken Thompson and Dennis Ritchie favored simplicity over perfection. The Unix system
would occasionally return early from a system call with an error stating that it had done nothing—the
"Interrupted System Call", or an error number 4 (EINTR) in today's systems. Of course the call had been
aborted in order to call the signal handler. This could only happen for a handful of long-running system
calls such as read(), write(), open(), and select(). On the plus side, this made the I/O system
many times simpler to design and understand. The vast majority of user programs were never affected
because they didn't handle or experience signals other than SIGINT or Control-C and would die right
away if one was raised. For the few other programs—things like shells or text editors that respond to job
control key presses—small wrappers could be added to system calls so as to retry the call right away if
this EINTR error was raised. Thus, the problem was solved in a simple manner.


       "Unix is simple. It just takes a genius to understand its simplicity." – Dennis Ritchie
       "Unix was not designed to stop its users from doing stupid things, as that would also stop them
    from doing clever things." – Doug Gwyn
       "Unix never says 'please'." – Rob Pike
       "Unix is user-friendly. It just isn't promiscuous about which users it's friendly with." – Steven King
       "Those who don't understand Unix are condemned to reinvent it, poorly." – Henry Spencer
3. Unix architecture
A Unix architecture is a computer operating system architecture that embodies the Unix
philosophy. It may adhere to standards such as the Single UNIX Specification (SUS) or a
similar POSIX IEEE standard. No single published standard describes all Unix architecture computer
operating systems; this is in part a legacy of the Unix wars.
There are many systems which are Unix-like in their architecture. Notable among these are
the GNU/Linux distributions. The distinctions between Unix and Unix-like systems have been the subject
of heated legal battles, and the holders of the UNIX brand, The Open Group, object to "Unix-like" and
similar terms.

For distinctions between SUS branded UNIX architectures and other similar architectures, see Unix-like.

A Unix kernel — the core or key components of the operating system — consists of many kernel
subsystems like process management, memory management, file management, device management
and network management.

Each of the subsystems has some features:

       Concurrency: As Unix is a multiprocessing OS, many processes run concurrently to improve the
    performance of the system.
       Virtual memory (VM): The memory management subsystem implements virtual memory, so a
    user need not worry about the size of an executable program relative to the size of RAM.
       Paging: A technique to minimize internal as well as external fragmentation in physical memory.
       Virtual file system (VFS): A VFS hides the complexities of the different file systems from the
    user; the same standard file system calls can be used to access different file systems.
The kernel provides these and other basic services: interrupt and trap handling, separation between user
and system space, system calls, scheduling, timer and clock handling, file descriptor management.

Some key features of the Unix architecture concept are:

       Unix systems use a centralized operating system kernel which manages system and process
    activities.
       All non-kernel software is organized into separate, kernel-managed processes.
       Unix systems are preemptively multitasking: multiple processes can run at the same time, or
    within small time slices and nearly at the same time, and any process can be interrupted and moved
    out of execution by the kernel. This is known as thread management.
       Files are stored on disk in a hierarchical file system, with a single top location throughout the
    system (root, or "/"), with both files and directories, subdirectories, sub-subdirectories, and so on
    below it.
       With few exceptions, devices and some types of communications between processes are
    managed and visible as files or pseudo-files within the file system hierarchy. This is known
    as everything is a file. However, Linus Torvalds states that this is inaccurate and may be better
    rephrased as "everything is a stream of bytes".
The UNIX operating system supports the following features and capabilities:

       Multitasking and multiuser.
       Programming interface.
       Use of files as abstractions of devices and other objects.
       Built-in networking. (TCP/IP is standard)
       Persistent system service processes called "daemons" and managed by init or inetd.
Some ideas may appear unconventional to new users. This is mainly rooted in the fact that UNIX grew
up in a computing culture quite different from that of the personal computer.

The UNIX-HATERS Handbook covers some of these design failures from the user's point of view.
Although some of its information is quite dated and cannot be applied to modern Unixes such
as Linux, Eric S. Raymond found that several of the issues it raises still prevail, while others have since
been resolved. Raymond concludes that not all of the concepts behind Unix can be dismissed as
non-functional, even though the book's intention may have been to portray Unix as inferior without
encouraging discussion with developers to actually fix the issues.

The architecture of Windows NT, a line of operating systems produced and sold by Microsoft, is a
layered design that consists of two main components, user mode and kernel mode. It is
a preemptive, reentrant operating system, which has been designed to work
with uniprocessor and symmetrical multiprocessor (SMP)-based computers. To process input/output (I/O)
requests, it uses packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O.
Starting with Windows 2000, Microsoft began making 64-bit versions of Windows available; before this,
these operating systems only existed in 32-bit versions.

Programs and subsystems in user mode are limited in terms of what system resources they have access
to, while the kernel mode has unrestricted access to the system memory and external devices. The
Windows NT kernel is known as a hybrid kernel. The architecture comprises a simple kernel, hardware
abstraction layer (HAL), drivers, and a range of services (collectively named Executive), which all exist in
kernel mode.

4. Operating system
An operating system (OS) is a collection of software that manages computer hardware resources and
provides common services for computer programs. The operating system is a vital component of
the system software in a computer system. Application programs usually require an operating system to
function.
Time-sharing operating systems schedule tasks for efficient use of the system and may also include
accounting for cost allocation of processor time, mass storage, printing, and other resources.

For hardware functions such as input/output and memory allocation, the operating system acts as an
intermediary between programs and the computer hardware. Although the application code is usually
executed directly by the hardware, it frequently makes system calls to an OS function or is
interrupted by it. Operating systems can be found on almost any device that contains a computer—
from cellular phones and video game consoles to supercomputers and web servers.

Examples of popular modern operating systems include Android, BSD, iOS, Linux, Mac OS X, Microsoft
Windows, Windows Phone, and IBM z/OS. All these, except Windows and z/OS, share roots in UNIX.

Types of operating systems
        Real-time
        A real-time operating system is a multitasking operating system that aims at executing real-time
        applications. Real-time operating systems often use specialized scheduling algorithms so that
        they can achieve a deterministic nature of behavior. The main objective of real-time operating
        systems is their quick and predictable response to events. They have an event-driven or time-
        sharing design and often aspects of both. An event-driven system switches between tasks based
        on their priorities or external events while time-sharing operating systems switch tasks based on
        clock interrupts.
        Multi-user vs. single-user
        A multi-user operating system allows multiple users to access a computer system at the same
        time. Time-sharing systems and Internet servers can be classified as multi-user systems as they
        enable multiple-user access to a computer through the sharing of time. Single-user operating
        systems have only one user but may allow multiple programs to run at the same time.
        Multi-tasking vs. single-tasking
        A multi-tasking operating system allows more than one program to be running at a time, from the
        point of view of human time scales. A single-tasking system has only one running program. Multi-
        tasking can be of two types: pre-emptive or co-operative. In pre-emptive multitasking, the
        operating system slices the CPU time and dedicates one slot to each of the programs. Unix-like
        operating systems such as Solaris and Linux support pre-emptive multitasking, as
        does AmigaOS. Cooperative multitasking is achieved by relying on each process to give time to
        the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative
        multi-tasking. 32-bit versions, both Windows NT and Win9x, used pre-emptive multi-tasking. Mac
        OS prior to OS X supported cooperative multitasking.
        Distributed
        A distributed operating system manages a group of independent computers and makes them
        appear to be a single computer. The development of networked computers that could be linked
        and communicate with each other gave rise to distributed computing. Distributed computations
        are carried out on more than one machine. When computers in a group work in cooperation, they
        make a distributed system.
        Embedded
        Embedded operating systems are designed to be used in embedded computer systems. They are
        designed to operate on small machines like PDAs with less autonomy. They are able to operate
        with a limited number of resources. They are very compact and extremely efficient by design.
        Windows CE and Minix 3 are some examples of embedded operating systems.

UNIX and UNIX-like operating systems
Unix was originally written in assembly language. Ken Thompson wrote B, mainly based on BCPL and
drawing on his experience in the MULTICS project. B was replaced by C, and Unix, rewritten in C,
developed into a large, complex family of inter-related operating systems which have been influential in
every modern operating system.

The UNIX-like family is a diverse group of operating systems, with several major sub-categories
including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group, which licenses
it for use with any operating system that has been shown to conform to their definitions.
