Saturday 14 January 2012

Operating System


1. Overview

1          What is microkernel architecture? What are its benefits?                                        [4]
Ans :--
        
In computer science, a microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC). If the hardware provides multiple rings or CPU modes, the microkernel is the only software executing at the most privileged level (generally referred to as supervisor or kernel mode). Traditional operating system functions, such as device drivers, protocol stacks and file systems, are removed from the microkernel to run in user space. In source code size, microkernels tend to be under 10,000 lines of code as a general rule; MINIX, for example, has around 4,000 lines of code. Kernels larger than 20,000 lines are generally not considered microkernels.
Microkernels developed in the 1980s as a response to changes in the computer world, and to the difficulty of adapting existing "mono-kernels" to these new systems. New device drivers, protocol stacks, file systems and other low-level services were being developed all the time; this code was normally located in the monolithic kernel, and thus required considerable work and careful code management to change. Microkernels were developed with the idea that all of these services would be implemented as user-space programs, like any other, allowing them to be worked on independently and started and stopped like any other program. This would not only make these services easier to work on, but would also isolate the kernel code so it could be finely tuned without worrying about unintended side effects. Moreover, it would allow entirely new operating systems to be "built up" on a common core, aiding OS research.

2.         Distinguish between multiprocessing and multiprogramming.                                    [4]
Ans :--  Multiprogramming: In multiprogramming systems, the running task keeps running until it performs an operation that requires waiting for an external event (e.g. reading from a tape) or until the computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming systems are designed to maximize CPU usage.

Multitasking: In computing, multitasking is a method by which multiple tasks, also known as processes, share common processing resources such as a CPU. In the case of a computer with a single CPU, only one task is said to be running at any point in time, meaning that the CPU is actively executing instructions for that task. Multitasking solves the problem by scheduling which task may be the one running at any given time, and when another waiting task gets a turn. The act of reassigning a CPU from one task to another one is called a context switch.

Multiprocessing: Multiprocessing is a generic term for the use of two or more central processing units (CPUs) within a single computer system. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple chips in one package, multiple packages in one system unit, etc.).

Multiprocessing sometimes refers to the execution of multiple concurrent software processes in a system as opposed to a single process at any one instant.
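The distinction is easiest to see with processes created explicitly. The sketch below (POSIX-only; `square_in_child` is an illustrative name, not a standard API) forks a second process so that two processes with separate address spaces coexist, which is exactly the situation that multitasking and multiprocessing systems manage:

```c
#include <assert.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a computation in a separate process and collect its result
 * through a pipe. fork() creates a child with its own address space,
 * so the parent and child can only communicate via the kernel. */
int square_in_child(int n) {
    int fds[2];
    if (pipe(fds) == -1)
        return -1;
    pid_t pid = fork();
    if (pid == 0) {                           /* child process */
        int result = n * n;
        write(fds[1], &result, sizeof result);
        _exit(0);
    }
    int result = -1;
    read(fds[0], &result, sizeof result);     /* parent blocks until the child writes */
    waitpid(pid, NULL, 0);                    /* reap the child */
    close(fds[0]);
    close(fds[1]);
    return result;
}
```

On a single CPU the scheduler interleaves the two processes (multitasking); on a multiprocessor they can genuinely run at the same instant.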


What are the basic functions of an operating system? - The operating system controls and coordinates the use of the hardware among the various application programs for various users. It acts as a resource allocator and manager: since there are many, possibly conflicting, requests for resources, the operating system must decide which requests are granted so that the computer system operates efficiently and fairly. The operating system is also a control program that supervises user programs to prevent errors and improper use of the computer; it is especially concerned with the operation and control of I/O devices.
3.         What are the desirable features of a real-time operating system?                           [4]
ans :----- In general, an operating system (OS) is responsible for managing the hardware resources of a computer and hosting applications that run on the computer. An RTOS performs these tasks, but is also specially designed to run applications with very precise timing and a high degree of reliability. This can be especially important in measurement and automation systems where downtime is costly or a program delay could cause a safety hazard.
To be considered "real-time", an operating system must have a known maximum time for each of the operations that it performs (or at least be able to guarantee that maximum most of the time). Some of these operations include OS calls and interrupt handling. Operating systems that can absolutely guarantee a maximum time for these operations are referred to as "hard real-time", while operating systems that can only guarantee a maximum most of the time are referred to as "soft real-time". To fully grasp these concepts, it is helpful to consider an example.
Imagine that you are designing an airbag system for a new model of car. In this case, a small error in timing (causing the airbag to deploy too early or too late) could be catastrophic and cause injury. Therefore, a hard real-time system is needed. On the other hand, if you were to design a mobile phone that received streaming video, it may be ok to lose a small amount of data occasionally even though on average it is important to keep up with the video stream. For this application, a soft real-time operating system may suffice.
The main point is that, if programmed correctly, an RTOS can guarantee that a program will run with very consistent timing. Real-time operating systems do this by providing programmers with a high degree of control over how tasks are prioritized, and typically also allow checking to make sure that important deadlines are met. 

4.         What is the purpose of the Command Interpreter? Why is it usually separate from the Kernel?                                                                                                                     [4]
Ans :--- A command interpreter is the interface between the operating system and the user. The user gives commands which are executed by the operating system (usually by turning them into system calls). The main function of a command interpreter is to get and execute the next user-specified command. The command interpreter is usually not part of the kernel, since multiple command interpreters (shells, in UNIX terminology) may be supported by an operating system, and they do not really need to run in kernel mode. There are two main advantages to separating the command interpreter from the kernel.
1. If we want to change the way the command interpreter looks, i.e. change its interface, we can do so only if the command interpreter is separate from the kernel; kernel code cannot be modified casually, so an in-kernel interface could not be changed.
2. If the command interpreter is part of the kernel, it is possible for a malicious process to gain access to parts of the kernel that it should not reach. To avoid this scenario it is advantageous to keep the command interpreter separate from the kernel.
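The core of such a separate interpreter needs nothing more than ordinary user-mode system calls, which is the point of keeping it outside the kernel. A minimal sketch (POSIX assumed; `run_command` is a hypothetical helper - real shells add parsing, pipes, signals and job control):

```c
#include <assert.h>
#include <sys/wait.h>
#include <unistd.h>

/* Skeleton of a shell's execute step: fork a child, replace its image
 * with the requested command, and report the command's exit status.
 * Everything here runs in user mode. */
int run_command(char **argv) {
    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[0], argv);  /* searches PATH; returns only on failure */
        _exit(127);             /* conventional "command not found" status */
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

A shell is essentially this function inside a read-print loop: get a line, split it into argv, call run_command, repeat.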


5.         A programmer has the option of writing a program that runs as multiple tasks or writing it to run as one multithreaded task. Name two advantages of writing it as a single multithreaded task.                                                                                               [4]
6          Point out the architectural features that must be present in order to support a Multi Programming Operating System along with proper justification.                            [4]
7          Suppose a new user wishes to do high level programming on a multi user computer system. The source code has already been developed on paper but the source file is yet to be created on disk. The system is already powered on. Specify, in brief, all the underlying system software that gets involved from the moment the new user logs on, to the point an .EXE file is created on Disk.                           [4]
8          Distinguish among the following means of invoking an executable code.
            i)          The Execute command as available in the Operating System.
ANS :-- The exec collection of functions of Unix-like operating systems causes the running process to be completely replaced by the program passed as an argument to the function. As a new process is not created, the process identifier (PID) does not change, but the data, heap and stack of the original process are replaced by those of the new process. In the execl, execlp, execv, and execvp calls, the new process image inherits the current environment variables. Files open when an exec call is made remain open in the new process. This aspect is used to specify the standard streams (stdin, stdout and stderr) of the new process.

In MS-DOS environments, a program executed with one of the exec functions is always loaded into memory as if the "maximum allocation" in the program's executable file header is set to the default value 0xFFFF. The EXEHDR utility can be used to change the maximum allocation field of a program. However, if this is done and the program is invoked with one of the exec functions, the program might behave differently from a program invoked directly from the operating-system command line or with one of the spawn functions.

Many Unix shells also offer an exec built-in command that replaces the shell process with the specified program. Wrapper scripts often use this command to run a program (either directly or through an interpreter or virtual machine) after setting environment variables or other configuration. By using exec, the resources used by the shell program do not need to stay in use after the program is started.

Prototypes

The functions are declared in the unistd.h header for the POSIX standard and in process.h for DOS, OS/2, and Windows.
int execl(char const *path, char const *arg0, ...);
int execle(char const *path, char const *arg0, ..., char const * const *envp);
int execlp(char const *file, char const *arg0, ...);
int execv(char const *path, char const * const * argv);
int execve(char const *path, char const * const *argv, char const * const *envp);
int execvp(char const *file, char const * const *argv);
Some implementations provide these functions named with a leading underscore (e.g. _execl).

Function names

The base of each function is exec, followed by one or more letters:
e - An array of pointers to environment variables is explicitly passed to the new process image.
l - Command-line arguments are passed individually to the function.
p - Uses the PATH environment variable to find the file named in the path argument to be executed.
v - Command-line arguments are passed to the function as an array of pointers.

path

The path argument specifies the path name of the file to execute as the new process image. Arguments beginning at arg0 are pointers to arguments to be passed to the new process image. The argv value is an array of pointers to arguments.

arg0

The first argument arg0 should be the name of the executable file. Usually it is the same value as the path argument. Some programs may incorrectly rely on this argument providing the location of the executable, but there is no guarantee of this nor is it standardized across platforms.

envp

Argument envp is an array of pointers to environment settings. The exec calls named ending with an e alter the environment for the new process image by passing a list of environment settings through the envp argument. This argument is an array of character pointers; each element (except for the final element) points to a null-terminated string defining an environment variable.
Each null-terminated string has the form:
name=value
where name is the environment variable name, and value is the value of that variable. The final element of the envp array must be null. If envp itself is null, the new process inherits the current environment settings.

Return value

Normally an exec function replaces the current process, so it cannot return anything to the original process. Processes do have an exit status, but that value is collected by the parent process. If an exec function does return to the calling process, an error occurred; the return value is −1 and errno is set to one of the following values:
E2BIG   - The argument list exceeds the system limit.
EACCES  - The specified file has a locking or sharing violation.
ENOENT  - The file or path name was not found.
ENOMEM  - Not enough memory is available to execute the new process image.
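The pieces above combine as follows. This sketch (assumes a POSIX system with /bin/sh; `child_sees_env` is an illustrative name) uses the 'e' variant so the child's environment is exactly the envp array passed, and relies on fork() so the calling process survives the replacement:

```c
#include <assert.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns 1 if a child started with execle() sees exactly the
 * environment we passed in envp, 0 otherwise. */
int child_sees_env(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* name=value strings, NULL-terminated, as described above */
        char *envp[] = { "GREETING=hello", NULL };
        execle("/bin/sh", "sh", "-c",
               "case \"$GREETING\" in hello) exit 0;; *) exit 1;; esac",
               (char *)NULL, envp);   /* arg list ends with NULL, then envp */
        _exit(127);                   /* reached only if execle itself failed */
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```

Note that on success execle never returns: the _exit(127) line runs only if the exec itself failed, matching the return-value rules above.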
 
 

ii)                 The CALL instruction as provided in the Assembly Instruction Set.

Ans :-- The CALL instruction transfers control to a subroutine within the same program. It pushes the address of the instruction following the CALL (the return address) onto the stack and loads the subroutine's address into the program counter, so execution continues at the subroutine. The subroutine ends with a RET instruction, which pops the return address off the stack and resumes execution just after the CALL.

Unlike invoking an executable through the operating system, CALL involves no OS intervention and no new process: the caller and the subroutine share the same process, address space and privilege level, so the transfer costs only a few machine cycles.
 
iii)              A system call provided to invoke Operating System services.

ANS :-- A system call is the invocation of an operating system routine. Operating systems contain sets of routines for performing various low-level operations; for example, all operating systems have a routine for creating a directory. If you want to execute an operating system routine from a program, you must make a system call. System calls provide an interface between the process and the operating system: they allow user-level processes to request services that the process itself is not allowed to perform. In handling the trap, the operating system enters kernel mode, where it has access to privileged instructions, and performs the desired service on behalf of the user-level process. It is because of the critical nature of these operations that the operating system itself performs them every time they are needed. For example, for I/O a process makes a system call telling the operating system to read or write a particular area, and this request is satisfied by the operating system.
System programs provide basic functionality to users so that they do not need to write their own environment for program development (editors, compilers) and program execution (shells). In some sense, they are bundles of useful system calls.
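On Linux the layering is directly visible: library routines such as printf() bottom out in the write() system call, and syscall(2) lets a program issue the trap by number itself. A Linux-specific sketch (`raw_write` is an illustrative name):

```c
#include <assert.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Issue the write system call directly by number -- the same trap
 * that the write() library wrapper performs on the program's behalf.
 * Returns the number of bytes written, or -1 on error. */
long raw_write(const char *msg, size_t len) {
    return syscall(SYS_write, STDOUT_FILENO, msg, len);
}
```

In normal code you would call write() or printf() instead; the point is only that the wrapper and the raw trap reach the same kernel routine.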

iv)       Invocation caused due to an erroneous computation condition like division by zero.
v)         A code invoked due to servicing of hardware interrupt coming from a peripheral device.                        

Ans                 Interrupts

An interrupt is a signal that stops the current program, forcing it to execute another program immediately. The interrupt does this without waiting for the current program to finish. It is unconditional and immediate, which is why it is called an interrupt.
The whole point of an interrupt is that the main program can perform a task without worrying about an external event.

Example

For example, if you want to read a push button connected to one pin of an input port, you could read it in one of two ways: either by polling it or by using interrupts.

Polling

Polling is simply reading the button input regularly. Usually you need to do some other tasks as well e.g. read an RS232 input so there will be a delay between reads of the button. As long as the delays are small compared to the speed of the input change then no button presses will be missed.
If however you had to do some long calculation then a keyboard press could be missed while the processor is busy. With polling you have to be more aware of the processor activity so that you allow enough time for each essential activity.

Hardware interrupt

By using a hardware interrupt driven button reader the calculation could proceed with all button presses captured. With the interrupt running in the background you would not have to alter the calculation function to give up processing time for button reading.
The interrupt routine obviously takes processor time – but you do not have to worry about it while constructing the calculation function.
You do have to keep the interrupt routine small compared to the processing time of the other functions - it's no good putting tons of operations into the ISR. Interrupt routines should be kept short and sweet so that the main part of the program executes correctly, e.g. with lots of interrupts you need the routine to finish quickly, ready for the next one.
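The "short and sweet" rule leads to a standard pattern: the ISR only records the event and the main loop does the real work later. A host-side sketch of the idea (on a real microcontroller `button_isr` would be registered on the interrupt vector and the names here are illustrative; on a PC we can only call it by hand):

```c
#include <assert.h>
#include <signal.h>

/* Shared between "interrupt" context and main-loop context, so it is
 * volatile and of a type that can be written atomically. */
static volatile sig_atomic_t presses_pending = 0;

/* The ISR: record the event and return immediately. */
void button_isr(void) {
    presses_pending++;
}

/* Main-loop work deferred from the ISR: this is where debouncing,
 * UI updates and other slow processing belong. */
int drain_presses(void) {
    int handled = 0;
    while (presses_pending > 0) {
        presses_pending--;
        handled++;
    }
    return handled;
}
```

No press is lost even if the main loop is busy with a long calculation: the counter simply accumulates until drain_presses() next runs.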

Hardware Interrupt or polling ?

The benefit of the hardware interrupt is that processor time is used efficiently and not wasted polling input ports. The benefit of polling is that it is easy to do.
Another benefit of using interrupts is that in some processors you can use a wake-from-sleep interrupt. This lets the processor go into a low power mode, where only the interrupt hardware is active, which is useful if the system is running on batteries.

Hardware interrupt Common terms

Terms you might hear associated with hardware interrupts are ISR, interrupt mask, non maskable interrupt, an asynchronous event, interrupt vector and context switching.
ISR
Interrupt Service Routine.
Interrupt vector
The address that holds the location of the ISR.
Interrupt mask
Controls which interrupts are active.
NMI
Non Maskable Interrupt - an interrupt that is always active.
Asynchronous event
An event that could happen at any time.
Context switching
Saving/restoring data before & after the ISR.

ISR

An ISR is simply another program function. It is no different from any other function except that it does context switching (saving/restoring processor registers) and at the end of the function it re-enables interrupts.
After the ISR finishes, program execution returns to the original program, which continues exactly from where it was interrupted. The original program will have no idea that this has happened.

Hardware Interrupt vector

This is a fixed address that contains the location of your ISR - for a PIC micro it is usually address 4. In other micros there may be a separate interrupt vector for each interrupt source - you have to write an interrupt routine for each one that you want to use. PIC micros have just one interrupt vector, and you have to detect which interrupt triggered by examining the interrupt flag register(s).
You program the interrupt address with the address of your interrupt routine. Whenever the interrupt is triggered (and if the interrupt is unmasked) program operation jumps to the location of your interrupt routine.
Note: high level language compilers take care of all of this for you - in 'C' you just declare the function using the keyword interrupt (as the type returned from the function). It then puts the address of this routine in the interrupt vector.

NMI

The NMI is exactly the same as a normal interrupt except that you cannot control whether or not it is active - it is always active. It is more commonly found on larger (non-microcontroller) processors as a dedicated input pin. In this case it is more than likely fed by a brown-out (PSU voltage dip) detector circuit.
You do not need it in a microcontroller, as you can achieve exactly the same functionality using a programmable interrupt, and many microcontrollers have built-in brown-out detectors (BODs).

Asynchronous event

This is simply an event that is not synchronised to the processor clock. It is an event that the processor cannot predict, e.g. a button press.

Context switching

This means that the register state is preserved at the start of an ISR and restored at the end of an ISR. It ensures that the function that was interrupted is not affected by the ISR.

A Basic Interrupt System.

This is a general description of an interrupt system (biased slightly to PIC micros) which is true for any interrupt module and it is useful for understanding how to control and use interrupts.
[Figure: hardware interrupt block diagram]

Input sources.

These are signals that start an interrupt. They can be external pins (where a rising or falling edge of an input triggers the interrupt) or internal peripheral interrupts, e.g. ADC completed, serial Rx data received, timer overflow etc. (The 16F877 has 15 interrupt sources.)
Note: you can usually control which edge (rising or falling) is used by setting bits in yet another register!

Interrupt flags.

Each hardware interrupt source has an associated interrupt flag - whenever the interrupt is triggered the corresponding interrupt flag is set. These flags are usually stored as bits within an interrupt register.
The processor can read from and write to the interrupt register, reading from it to find out which interrupts occurred and writing to it to clear the interrupt flags.

Interrupt mask

The interrupt mask has a set of bits identical to those in the interrupt register. Setting any bit (or unmasking) lets the corresponding signal source generate an interrupt - causing the processor to execute the ISR.
Note: When a bit in the mask is clear (masked) the ISR is not activated for that signal source but the interrupt register is still set by the signal source. So you can still detect what the hardware is doing by polling the interrupt register.

Hardware interrupt vector

The interrupt vector is a location in memory that you program with the address of your interrupt service routine (ISR). Whenever an unmasked interrupt occurs program execution starts from the address contained in the interrupt vector.
                                                                  [15]
b)         Apart from all the above features, what other hardware support(s) [if any] are needed to implement a Time Shared Operating system that also has a high speed peripheral like the disk drive? Justify your answer.                                                                                [3]

January-2006 [8]
1.
                                [4]
g)         How can a single copy of a text editor be used to serve multiple users in a time-sharing system?                                                                                                                     [4]

July-2006 [14]
1.         Answer the following questions.
a)         Briefly describe the four major resource managers in a typical operating system.           [4]
ans:--- This is an alternate view of the services performed by the OS. The OS provides an orderly and controlled allocation of the processors, memories and I/O devices. When a computer has multiple users, the need for managing and protecting the memory, I/O devices and other resources is greater. Thus the primary task of the OS is to keep track of who is using which resource, to grant resource requests, to mediate conflicting requests from different programs, etc. Resource management includes multiplexing resources in two ways - "in time" and "in space". (i) When a resource is time-multiplexed, different programs or different users take turns using that resource, e.g. a printer. (ii) When a resource is space-multiplexed, instead of taking turns the resource is divided among them, i.e. each one gets a part of the resource, e.g. sharing main memory or the hard disk. The four major resource managers are:
1. Memory Organization
2. File System (and disk) management
3. Program execution + Multitasking (in modern OSs)
4. Hardware detection and management




3.
a)         Identify and explain three styles of switching from user mode to supervisor mode.                  [6]
ans :-- 1. Interrupts: when a device sends an interrupt to the CPU --
Your keyboard is a very simple input device; simple because it generates small amounts of data very slowly (by a computer's standards). When you press or release a key, that event is signalled up the keyboard cable to raise a hardware interrupt. It is the operating system's job to watch for such interrupts. For each possible kind of interrupt, there will be an interrupt handler, a part of the operating system that stashes away any data associated with the interrupt (like your keypress/keyrelease value) until it can be processed. What the interrupt handler for your keyboard actually does is post the key value into a system area near the bottom of memory. There, it will be available for inspection when the operating system passes control to whichever program is currently supposed to be reading from the keyboard.

More complex input devices like disk or network cards work in a similar way. Earlier, I referred to a disk controller using the bus to signal that a disk request has been fulfilled. What actually happens is that the disk raises an interrupt. The disk interrupt handler then copies the retrieved data into memory, for later use by the program that made the request.

Every kind of interrupt has an associated priority level. Lower-priority interrupts (like keyboard events) have to wait on higher-priority interrupts (like clock ticks or disk events). Unix is designed to give high priority to the kinds of events that need to be processed rapidly in order to keep the machine's response smooth. In your operating system's boot-time messages, you may see references to IRQ numbers. You may be aware that one of the common ways to misconfigure hardware is to have two different devices try to use the same IRQ, without understanding exactly why.

Here's the answer. IRQ is short for "Interrupt Request". The operating system needs to know at startup time which numbered interrupts each hardware device will use, so it can associate the proper handlers with each one. If two different devices try to use the same IRQ, interrupts will sometimes get dispatched to the wrong handler. This will usually at least lock up the device, and can sometimes confuse the OS badly enough that it will flake out or crash.
2. Traps: when a program executes a system call, which is usually implemented by a trap instruction --
There are two techniques by which a program executing in user mode can request the kernel's services: the system call (trap) approach and the message passing approach.
Operating systems are designed with one or the other of these two facilities, but not both. First, assume that a user process wishes to invoke a particular target system function. For the system call approach, the user process uses the trap instruction. The idea is that the system call should appear to be an ordinary procedure call to the application program; the OS provides a library of user functions with names corresponding to each actual system call. Each of these stub functions contains a trap to the OS function. When the application program calls the stub, it executes the trap instruction, which switches the CPU to kernel mode, and then branches (indirectly through an OS table) to the entry point of the function which is to be invoked. When the function completes, it switches the processor to user mode and then returns control to the user process, thus simulating a normal procedure return.

In the message passing approach, the user process constructs a message that describes the desired service. Then it uses a trusted send function to pass the message to a trusted OS process. The send function serves the same purpose as the trap; that is, it carefully checks the message, switches the processor to kernel mode, and then delivers the message to a process that implements the target functions. Meanwhile, the user process waits for the result of the service request with a message receive operation. When the OS process completes the operation, it sends a message back to the user process.

The distinction between the two approaches has important consequences regarding the relative independence of the OS behavior from the application process behavior, and the resulting performance. As a rule of thumb, operating systems based on a system call interface can be made more efficient than those requiring messages to be exchanged between distinct processes. This is the case even though the system call must be implemented with a trap instruction; that is, even though the trap is relatively expensive to perform, it is more efficient than the message passing approach, where there are generally higher costs associated with process multiplexing, message formation and message copying.

The system call approach has the interesting property that there is not necessarily any OS process. Instead, a process executing in user mode changes to kernel mode when it is executing kernel code, and switches back to user mode when it returns from the OS call. If, on the other hand, the OS is designed as a set of separate processes, it is usually easier to design it so that it gets control of the machine in special situations than if the kernel is simply a collection of functions executed by user processes in kernel mode. Even procedure-based operating systems usually find it necessary to include at least a few system processes (called daemons in UNIX) to handle situations where the machine is otherwise idle, such as scheduling and handling the network.

3. Hardware exceptions: when a program performs an operation that causes a hardware exception, such as divide by zero, illegal memory access or execution of an illegal opcode -- A process's execution may result in the generation of a hardware exception, for instance, if the process attempts to divide by zero or incurs a TLB miss. In Unix-like operating systems, this event automatically changes the processor context to start executing a kernel exception handler. In the case of some exceptions, such as a page fault, the kernel has sufficient information to fully handle the event itself and resume the process's execution. Other exceptions, however, the kernel cannot process intelligently and must instead defer the exception handling operation to the faulting process. This deferral is achieved via the signal mechanism, wherein the kernel sends the process a signal corresponding to the current exception. For example, if a process attempted to divide by zero on an x86 CPU, a divide error exception would be generated and cause the kernel to send the SIGFPE signal to the process. Similarly, if the process attempted to access a memory address outside of its virtual address space, the kernel would notify the process of this violation via a SIGSEGV signal. The exact mapping between signal names and exceptions is obviously dependent upon the CPU, since exception types differ between architectures.

January-2008 [8]
1.
a)         List two salient features of each of the following types of systems:
            iii)       Time Sharing
            iv)       Real Time Systems                                                                                  [4]
ans :--- Time Sharing: (i) the CPU is multiplexed among several users or processes in short time slices, so each interactive user gets the impression of a dedicated machine; (ii) the scheduler is tuned for fast response time to interactive requests rather than for raw throughput.
Real Time Systems: (i) every operation has a known, bounded worst-case completion time - a "hard" real-time system guarantees the bound always, while a "soft" real-time system guarantees it most of the time; (ii) scheduling is priority- and deadline-driven, so that critical tasks (e.g. an airbag controller) always meet their deadlines where a delay could be costly or cause a safety hazard.

3.
b)         What is meant by device independent I/O software?                                               [4]
4.         What are the advantages and disadvantages of multithreading in an operating system?
Advantages of Multithreading
- Better responsiveness to the user: operations which can take a long time can be put in a separate thread, and the application preserves its responsiveness to user input.
- Faster application: the work is divided among several threads.

Disadvantages of Multithreading
- Overhead for the processor: each time the CPU switches away from a thread it must save in memory the point the thread has reached, so that next time the processor runs that thread it knows where to resume.
- The code becomes more complex: using threads makes the code harder to read and debug.
- Sharing resources among the threads can lead to deadlocks or other unexpected problems.
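Both sides of the trade-off show up in a few lines of pthreads code (a sketch; compile with -pthread, and the names are illustrative). The threads share one address space, so the counter is visible to both, and the mutex is precisely the extra complexity that shared state forces on you:

```c
#include <assert.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Worker: increment the shared counter 100000 times. Without the
 * mutex, the two threads' read-modify-write cycles would interleave
 * and some updates would be lost. */
static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two workers over the same shared counter and return the total. */
long run_two_threads(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Removing the lock/unlock pair typically yields a total below 200000 - a concrete instance of the "unexpected problems" listed above.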


