Saturday 14 January 2012

operating system(2)


2. Process Management

1          State an advantage of having different time-quantum sizes for different levels of a multilevel queueing system.                                                                                      [4]
2          Why are special hardware instructions necessary for solving the critical-section problem in a multiprocessor environment?                                                                                        [4]
3          Describe the queuing implementation of a semaphore with reference to processes that wait on a particular semaphore, and what happens when the corresponding resource is released.                                                                                                         [10]
4          Differentiate between synchronous and asynchronous message passing. What is a mailbox?                                                                                                              [8]
5          What is your opinion about the deadlock prevention strategy? Describe an approach to detect deadlock in a system. What are the possible recovery strategies once deadlock is detected?                                                                                                             [12]
6          Explain the concept of Fair-share scheduling.                                                            [6]
ans :-- Fair-share scheduling is a scheduling strategy for computer operating systems in which the CPU usage is equally distributed among system users or groups, as opposed to equal distribution among processes.
For example, if four users (A,B,C,D) are concurrently executing one process each, the scheduler will logically divide the available CPU cycles such that each user gets 25% of the whole (100% / 4 = 25%). If user B starts a second process, each user will still receive 25% of the total cycles, but both of user B's processes will now use 12.5%. On the other hand, if a new user starts a process on the system, the scheduler will reapportion the available CPU cycles such that each user gets 20% of the whole (100% / 5 = 20%).
Another layer of abstraction allows us to partition users into groups, and apply the fair share algorithm to the groups as well. In this case, the available CPU cycles are divided first among the groups, then among the users within the groups, and then among the processes for that user. For example, if there are three groups (1,2,3) containing three, two, and four users respectively, the available CPU cycles will be distributed as follows:
  • 100% / 3 groups = 33.3% per group
  • Group 1: (33.3% / 3 users) = 11.1% per user
  • Group 2: (33.3% / 2 users) = 16.7% per user
  • Group 3: (33.3% / 4 users) = 8.3% per user
One common method of logically implementing the fair-share scheduling strategy is to recursively apply the round-robin scheduling strategy at each level of abstraction (processes, users, groups, etc.). The time quantum required by round-robin is arbitrary, as any equal division of time will produce the same results.
Fair-share scheduling is a method of allotting CPU time to multiple processes in a defined way. When multiple processes need a CPU, operating systems frequently allot CPU time slices on a round-robin, equal-time basis for processes with the same priority. For example, if there are 4 processes wanting the CPU, then each process will get 1/4 of a second every second. This is not to say that each process will get its 1/4 of a second all at one time. Most operating systems define a time slice, something like 50 ms (an arbitrary number that doesn't necessarily represent any OS), and will give each process a time slice. So in this made-up case, each process will get 5 time slices every second.
Long ago someone decided that users with more legitimate requirements need more of the time than less needy users. "Need" often reflects what level of service the user wants to pay for. The fair-share scheduler was born. There have been a number of implementations over the years, but one of the most common was to simply assign weights to users such that one user may get twice as many time slices in a given time period as another. This has often been expanded to groups, so that processes belonging to one group of users would be given the same number of time slices as another group.
For example, if group A had 8 processes running and group B had 4, then each group would get half the time, meaning that each process in group A would only get 6.25% of the time (8 × 6.25% = 50%) and each process in group B would get 12.5% of the time.
There are a number of implementations of the fair-share scheduler other than the ones described here. One of them actually kept track of the accumulated CPU time over a time period for each user/group and used that information in its calculations. The theory was that if you had been a CPU hog in the past, you might not get as much time right now. This calculation was also time-weighted, so that your share of time in the last second counted for much more than a second a few minutes ago.
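The hierarchical division described above (groups, then users, then processes) can be sketched in a few lines. This is a hypothetical illustration of the arithmetic, not a real scheduler; the function name and data layout are my own.

```python
def fair_shares(groups):
    """Per-process CPU share for a {group: {user: n_processes}} map.

    Returns {(group, user): share_per_process}: 100% is split evenly
    among groups, then among each group's users, then among each
    user's processes -- the recursion described in the text.
    """
    shares = {}
    group_share = 1.0 / len(groups)
    for group, users in groups.items():
        user_share = group_share / len(users)
        for user, nprocs in users.items():
            shares[(group, user)] = user_share / nprocs
    return shares

# The three-group example from the text: 3, 2 and 4 users, one process each.
example = {
    1: {u: 1 for u in ("a", "b", "c")},
    2: {u: 1 for u in ("d", "e")},
    3: {u: 1 for u in ("f", "g", "h", "i")},
}
shares = fair_shares(example)
print(round(shares[(1, "a")] * 100, 1))  # 11.1% per user in group 1
print(round(shares[(2, "d")] * 100, 1))  # 16.7% per user in group 2
print(round(shares[(3, "f")] * 100, 1))  # 8.3% per user in group 3
```

The printed percentages match the bullet list above (33.3% per group, divided by 3, 2 and 4 users respectively).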


7          What are the advantages and disadvantages of user-level threads?                                                [4]
ans :-- User-level threads: if the user (application) has a large task and wants it performed in parts, the task is divided into parts and assigned to threads; these threads are user-level threads, managed entirely in user space.

Kernel-level threads: an operating-system-oriented approach in which the kernel handles thread scheduling, interrupts threads, and provides them with the kernel resources (e.g. drivers) they need to work.

Threads implemented in user mode have the advantage that they can be used even if the kernel doesn't offer any threading, and thread switching can be very fast. Apart from those two points, threads implemented in user mode have only disadvantages.

Threads implemented in user mode will not allow more than one thread to run at any time, even if there are multiple CPUs. Even worse, if a thread blocks in a system call, it blocks the whole process, so no other thread can execute while the process is sleeping.

I have heard of hybrid user/kernel-mode thread implementations. But they have to be very complicated, and I doubt they have any major advantage over an implementation done 100% in kernel mode.

The only real advantage of user-mode threads, when kernel support for threads exists, is the lower overhead of switching between two threads. And that is only significant if your program requires a lot of switches between threads.

From the kernel's point of view, the only thing special about threaded programs compared to non-threaded programs is the fact that multiple user tasks with the same virtual memory space exist. The kernel can use this knowledge to:
1) Do lazy switching, which means only switching the memory map when it is strictly necessary.
2) Give tasks with the same virtual memory space as the last one run a slightly higher priority when picking the next task to be scheduled.

I have not heard of other systems that have made the same effort to improve threading performance, and I haven't seen any comparisons between threads using the Linux kernel and threads implemented in user space, so I don't know if there is any significant difference in switching overhead.
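To make the idea of user-level threading concrete, here is a minimal, hypothetical sketch: each "thread" is a Python generator that yields at its own switch points, and an ordinary user-space loop plays the role of the round-robin scheduler. No kernel threading support is involved, which also illustrates the limitation above: if one of these "threads" blocked in a system call, the whole process would block.

```python
from collections import deque

trace = []

def thread(name, steps):
    # A cooperative user-level "thread": it runs until it
    # voluntarily yields control back to the user-space scheduler.
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield  # explicit switch point; no kernel involvement

ready = deque([thread("A", 2), thread("B", 2)])  # user-space ready queue
while ready:
    t = ready.popleft()
    try:
        next(t)          # run the thread until it yields
        ready.append(t)  # still runnable: back of the queue
    except StopIteration:
        pass             # thread finished
print(trace)  # ['A0', 'B0', 'A1', 'B1'] -- round-robin interleaving
```

Because both the "threads" and the scheduler are ordinary user code, a switch is just a function call, which is why user-level thread switching can be so fast.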
8          Under what circumstances can a parent terminate the execution of its child?                  [4]
9          What is the problem with priority scheduling algorithm?                                                   [4]
ans :-- The basic idea is straightforward: each process is assigned a priority, and the process with the highest priority is allowed to run. Equal-priority processes are scheduled in FCFS order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.
An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst. That is, the longer the CPU burst, the lower the priority and vice versa.
Priority can be defined either internally or externally. Internally defined priorities use some measurable quantity or quality to compute the priority of a process.
Examples of internal priorities are
  • Time limits.
  • Memory requirements.
  • File requirements,
        for example, number of open files.
  • CPU vs. I/O requirements.
Externally defined priorities are set by criteria that are external to the operating system, such as
  • The importance of the process.
  • Type or amount of funds being paid for computer use.
  • The department sponsoring the work.
  • Politics.
Priority scheduling can be either preemptive or non-preemptive.
  • A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
  • A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.
A major problem with priority scheduling is indefinite blocking or starvation. A solution to the problem of indefinite blockage of the low-priority process is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long period of time.
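A minimal sketch of aging (hypothetical numbers, lower priority number = higher priority): on each scheduling interval, every process still waiting in the ready queue has its priority number decreased, i.e. its priority raised, so even the lowest-priority process eventually reaches the top and cannot starve forever.

```python
def age(ready_queue, boost=1, top=0):
    # Aging pass: each waiting process's priority number is reduced
    # (its priority raised), bounded by the highest priority `top`.
    for proc in ready_queue:
        proc["prio"] = max(top, proc["prio"] - boost)

queue = [{"name": "starved", "prio": 10}, {"name": "favoured", "prio": 1}]
for _ in range(10):      # ten scheduling intervals without being dispatched
    age(queue)
print(queue[0]["prio"])  # 0: the starved process has aged up to top priority
```

After enough passes the long-waiting process ties with (or overtakes) newly arrived high-priority work, which is exactly the indefinite-blocking remedy described above.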

 


10        What is the difference between a long-term scheduler and a short-term scheduler?      [3]
11        Consider the following set of processes:

            Process   Arrival Time   Processing Time
            A              0                3
            B              2                6
            C              4                4
            D              6                5
            E              8                2
Define Turnaround Time. Compute the average turnaround time using each of the following scheduling policies:
i)          First Come First Served (FCFS),
ii)        Round Robin with time quantum = 4 units,
iii)       Shortest Process Next (SPN) or Shortest Job Next (SJN),
iv)       Shortest Remaining Time First (SRTF).
Which of the scheduling policies gives the best result? Comment on implementation aspects of that policy.                                                                                              [15]
12        The readers/writers problem is defined as follows: a data area is shared among a number of processes. Some processes only read that area (readers) whereas some others write into it (writers). The following conditions have to be satisfied:
i)          Any number of readers may simultaneously read.
ii)        Only one writer may write at a time.
iii)       If a writer is writing, no reader may read the area.
Suggest a possible solution to the above problem with the additional criteria that writers have priority.                                                                                                               [12]
13        Consider the following MaximumClaims matrix, ResourceAllocation matrix and MaximumNoOfResources vector for four processes P1, P2, P3 and P4 and three resources R1, R2 and R3.

                 MaximumClaims         ResourceAllocation
                 R1   R2   R3          R1   R2   R3
            P1    3    2    2           1    0    0
            P2    6    1    3           5    1    1
            P3    3    1    4           2    1    1
            P4    4    2    2           0    0    2

            MaximumNoOfResources: R1 = 9, R2 = 3, R3 = 6
i)          Suppose P1 requests one unit each of R1 and R3 and is granted the request. Answer the following questions with respect to this situation:
ii)        Find out the availability of each resource.
iii)       Is the situation safe? Give reasons for your answer.                                      [6]
14        What design issues have to be considered for scheduling on multiprocessors?              [3]

15        Which is the most commonly used scheduling algorithm, and why?                                [4]
16        Suppose that a system is in an unsafe state. Show that it is possible for the processes to complete their execution without entering a deadlock state.                                       [8]
17        Explain, how user-level threads can be implemented (including setting a time limit on any thread’s execution) without kernel intervention, except the use of system calls.             [6]

18        List out some of the possible mechanisms/situations that can bring any user process from running State to Ready State in a Multitasking Operating System.                            [4]
19        How can a CPU be brought from user mode to Kernel mode? How it can be brought back from the Kernel Mode to User mode? Justify your answer.                                         [4]
20        Consider a typical Ready queue status of a computer system having a single processor with support for multiprogramming as given below.

Process No.   Arrival Time (units)   Service Time (units)   Priority No.
P0                    5                     100                   3
P1                   15                     200                   5
P2                   13                     250                   1
P3                    8                      50                   2
P4                   20                     300                   4

            Assume the following:
i)    The Context switching time among the processes as well as between any process and the dispatcher = 5 units (fixed).
ii)  No Context switching time gets wasted to change the execution status from the dispatcher to the dispatched process.
iii) Dispatcher execution time = 10 units (fixed). The dispatcher has to execute once before any new process starts running.
iv) Enqueuing Time can be ignored.
v)   No preemption of any process is allowed.
For the process mix specified above, compare the performances of the following non-preemptive scheduling policies in the manner indicated below.
i)    First come First Served (F.C.F.S).
ii)  Priority based. [Assuming lower Priority Number implies high priority].
iii) Shortest Job First (S.J.F).
21        Specify the GANTT Chart.                                                                                  [6]
22        Compute the average Turn around time(s) of the processes.                                   [6]
23        Compute the average Waiting time(s) of the processes.                      
24    Consider a Computing environment where a process is given the privilege of accessing an object only n times. Suggest a scheme for implementing this policy.                                [6]
25        What are the necessary & sufficient conditions for a deadlock to occur involving concurrently non- sharable & reusable resources? Out of these which one is best suited to implement deadlock prevention mechanism? Justify your answer with reference to the Dining Philosophers’ Problem that involves four (4) philosophers.                              [7]
26        Consider the following snapshot of a system.

            Process No.   Current Resource Allocation   Maximum Resource Requirement
            P0                 0  0  1  2                    0  0  1  2
            P1                 1  0  0  0                    1  7  5  0
            P2                 1  3  5  4                    2  3  5  6
            P3                 0  6  3  2                    0  6  5  2
            P4                 0  0  1  4                    0  6  5  6

            Resource Available: 0  3  2  0


Answer the following.
i)    Is the system in a safe state? Verify your answer.
ii)  If a request from P4 arrives for (0 | 3 | 2 | 0) can the request be immediately granted? Justify your answer.                                                                                       [5]
27        Demonstrate that monitors, conditional critical regions and semaphores are all equivalent, in so far as the same type of synchronization problems can be implemented with them.                                                                                                                    [7]
28        Consider the following situations in a typical Multi Programmed Computer system:
*     The Operating System of the given Computer System employs pre-emptive FIRST COME FIRST SERVED Scheduling.
*     It employs semaphores to achieve mutual exclusion on the single printer it possesses.
      -     Any process wishing to use the printer must do so in the KERNEL mode.
      -     All the processes in the KERNEL mode possess the highest priority.
*     The Printer is currently printing the Output generated by the process PA. The process PA has got some more tasks to complete even after printing is done.
*     Another process PB has been waiting for some time for the printer.
*     The CPU is currently executing the process PC.
*     Two other processes PD & PE are ready to be executed. The Process PE having arrived earlier.
i)          Identify the states of each of the processes PA, PB, PC, PD & PE along with proper justifications.
ii)        When the printer finishes printing the job associated with the process PA, depict the sequence of events that take place in a stepwise fashion. What will be the new states of each of the processes PA, PB, PC, PD & PE?                             [9]

29        What is the difference between a process and a thread?
                                                                                                                                [4]
ans :-- The major differences between threads and processes are:
1. Threads share the address space of the process that created them; processes have their own address space.
2. Threads have direct access to the data segment of their process; processes have their own copy of the data segment of the parent process.
3. Threads can directly communicate with other threads of their process; processes must use interprocess communication to communicate with sibling processes.
4. Threads have almost no overhead; processes have considerable overhead.
5. New threads are easily created; new processes require duplication of the parent process.
6. Threads can exercise considerable control over threads of the same process; processes can only exercise control over child processes.
7. Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to the parent process do not affect child processes.

30        State the practical limitations of implementing non-preemptive SJF algorithm.              [4]
31        What is the difference between a long-term scheduler and a short-term scheduler?      [4]
32        What does ‘init’ do? What happens to the parent process id of a child when the parent terminates before its child? When does a child become ‘zombie’?                                 [8]
33        How does CPU time-slice affect the Round-Robin algorithm?                                            [4]
ans :-- Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority (also known as cyclic executive). Round-robin scheduling is simple, easy to implement, and starvation-free. Round-robin scheduling can also be applied to other scheduling problems, such as data packet scheduling in computer networks.
The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn.

 Process scheduling

In order to schedule processes fairly, a round-robin scheduler generally employs time-sharing, giving each job a time slot or quantum (its allowance of CPU time), and interrupting the job if it is not completed by then. The job is resumed the next time a time slot is assigned to that process. In the absence of time-sharing, or if the quanta were large relative to the sizes of the jobs, a process that produced large jobs would be favoured over other processes.
Example: If the time slot is 100 milliseconds, and job1 takes a total time of 250 ms to complete, the round-robin scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU.
  • Job1 = Total time to complete 250 ms (quantum 100 ms).
  1. First allocation = 100 ms.
  2. Second allocation = 100 ms.
  3. Third allocation = 100 ms but job1 self-terminates after 50 ms.
  4. Total CPU time of job1 = 250 ms.
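The numbered breakdown above can be checked with a tiny simulation (a sketch of the arithmetic only, not an OS scheduler): on each pass the job receives at most one quantum until its remaining time reaches zero.

```python
def rr_allocations(total_ms, quantum_ms):
    # List of CPU allocations a single job receives under round-robin
    # with the given quantum; the last slice may be shorter because
    # the job self-terminates partway through its slot.
    allocations = []
    remaining = total_ms
    while remaining > 0:
        slice_ms = min(quantum_ms, remaining)
        allocations.append(slice_ms)
        remaining -= slice_ms
    return allocations

allocs = rr_allocations(250, 100)
print(allocs)       # [100, 100, 50]: job1 finishes in its third slot
print(sum(allocs))  # 250: total CPU time matches the job's burst
```

Between those allocations the other jobs would each receive their own 100 ms slot, which is the behaviour the example describes.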
Another approach is to divide all processes into an equal number of timing quanta such that the quantum size is proportional to the size of the process. Hence, all processes end at the same time.

Data packet scheduling

In best-effort packet switching and other statistical multiplexing, round-robin scheduling can be used as an alternative to first-come first-served queuing.
A multiplexer, switch, or router that provides round-robin scheduling has a separate queue for every data flow, where a data flow may be identified by its source and destination address. The algorithm lets every active data flow that has data packets in the queue take turns in transferring packets on a shared channel in a periodically repeated order. The scheduling is work-conserving, meaning that if one flow is out of packets, the next data flow will take its place. Hence, the scheduling tries to prevent link resources from going unused.
Round-robin scheduling results in max-min fairness if the data packets are equally sized, since the data flow that has waited the longest time is given scheduling priority. It may not be desirable if the size of the data packets varies widely from one job to another. A user that produces large packets would be favored over other users. In that case fair queuing would be preferable.
If guaranteed or differentiated quality of service is offered, and not only best-effort communication, deficit round-robin (DRR) scheduling, weighted round-robin (WRR) scheduling, or weighted fair queuing (WFQ) may be considered.
In multiple-access networks, where several terminals are connected to a shared physical medium, round-robin scheduling may be provided by token passing channel access schemes such as token ring, or by polling or resource reservation from a central control station.
In a centralized wireless packet radio network, where many stations share one frequency channel, a scheduling algorithm in a central base station may reserve time slots for the mobile stations in a round-robin fashion and provide fairness. However, if link adaptation is used, it will take a much longer time to transmit a certain amount of data to "expensive" users than to others since the channel conditions differ. It would be more efficient to wait with the transmission until the channel conditions are improved, or at least to give scheduling priority to less expensive users. Round-robin scheduling does not utilize this. Higher throughput and system spectrum efficiency may be achieved by channel-dependent scheduling, for example a proportionally fair algorithm, or maximum throughput scheduling. Note that the latter is characterized by undesirable scheduling starvation.


34        Show and explain an implementation of the classical producer-consumer (producer produces an item, keeps it in a buffer from where the consumer is picking it up) problem using semaphore.                                                                                                       [10]
35        Rewrite the following code introducing code parallelism wherever applicable:
                        For i = 1 to k
                                    a(i) = b(i) + c(i)
                        For j = 1 to k
                                    d(j) = x(j) - y(j)
                        For p = 1 to k
                                    x(p) = y(p) + b(p)
                        read(m,n,o,r)
                        q = m*n + r/o
                        write(q)                                                                                                      [4]
36        Using the preemptive SJF (shortest-job-first) algorithm, draw the Gantt chart and calculate the average waiting time for the following processes:

            Process                       Arrival time              Burst time
             P0                                        0                              6
             P1                                        2                              4
             P2                                        3                            10
             P3                                        7                              9
                                                                                                                                            [5]
37        What is deadlock? How can deadlock be prevented by not allowing “Hold and Wait” ? Is it a feasible policy?                                                                                                    [5]
38        How can synchronization be achieved when two processes communicate by message passing?                                                                                                                     [5]
39        Provide a programming example of multithreading giving improved performance over a single-threaded solution.                                                                                   [3]

40        Discuss why we have different processor modes and how these modes are used in typical operating systems.                                                                                  [4]
41        Describe the similarities and difference between two different threads running in the same process and two independent processes.                                                          [4]
42        How can a CPU be brought from user mode to kernel mode? How it can be brought back from the kernel mode to user mode? Justify your answer.                                          [4]
43        As a system administrator you have noticed that usage peaks between 10:00 AM and 5:00 PM and between 7:00 PM and 10:00 PM. The company's CEO has decided to call on you to design a system where, during these peak hours, there will be three levels of users. Users in level 1 are to enjoy better response time than users in level 2, who in turn will enjoy better response time than users in level 3. You are to design such a system so that all users will still make some progress, but with the indicated preferences in place.
44        Will a fixed priority scheme with pre-emption and 3 fixed priorities work? Why, or why not?                                                                                                                      [5]
45        Will a UNIX-style multi-feedback queue work? Why, or why not?                                    [5]
46        If none of the above works, could you design a scheduling scheme that meets the requirements?                                                                                                                   [8]
47        Most round-robin schedulers use a fixed size quantum. Give an argument in favor of and against a small quantum.                                                                                    [6]
48        How one can prevent deadlock to take place in an operating system? Explain.              [6]
49        Define the critical-section problem for process synchronization. Explain why it is difficult to implement a solution to the critical-section problem.                                                                            [9]
50        Consider the following program fragment:
                        P (s1);
                        a++;
                        P (s2);
                        v++;
                        V (s2);
                        V (s1);
(s1, s2 are semaphores). All variables are automatic. Now, considering two threads running this fragment of code simultaneously, can there be a deadlock? Why, or why not?                [9]

51        How can one detect that a message is lost during Inter Process Communication?          [4]
52        Which are the possible events that may occur when a process is being executed by CPU?                                                                                                                               [4]
53        When does a process terminate? Which system call is used to terminate a process? Under what circumstances does a parent process terminate a child process?                        [6]
54        Define deadlock. What are the conditions that must hold simultaneously in a system for a deadlock situation to arise?                                                                                             [8]
55        Consider a system with five processes P0 through P4 and three resource types X, Y, Z.
            Following is the snapshot of system:
           

            Allocation     Max          Available
            X  Y  Z        X  Y  Z      X  Y  Z
P0          0  1  0        7  5  3      3  3  2
P1          2  0  0        3  2  2
P2          3  0  2        9  0  2
P3          2  1  1        2  2  2
P4          0  0  2        4  3  3

            Answer the following questions:
i)          State the Safety algorithm and Resource-Request algorithm
ii)        Is the system in a safe state?                                                                               [6]
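A minimal sketch of the Safety algorithm applied to the snapshot above (resource types X, Y, Z). The Need matrix is computed as Max minus Allocation; the algorithm repeatedly looks for a process whose Need can be met from the current Work vector:

```python
# Data transcribed from the snapshot above.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]

# Need = Max - Allocation
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

def safe_sequence(alloc, need, avail):
    """Return a safe sequence of process indices, or None if unsafe."""
    work = avail[:]
    finish = [False] * len(alloc)
    sequence = []
    while len(sequence) < len(alloc):
        for i in range(len(alloc)):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(3)):
                # Process i can run to completion and return its resources.
                work = [work[j] + alloc[i][j] for j in range(3)]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None  # no runnable process found: the state is unsafe
    return sequence

seq = safe_sequence(allocation, need, available)
print(seq)  # [1, 3, 0, 2, 4] -- a safe sequence exists, so the state is safe
```

With first-fit tie-breaking this yields the safe sequence P1, P3, P0, P2, P4, so the system is in a safe state.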
56        What are the disadvantages of the First Come First Served and Shortest Job First scheduling algorithms?                                                                                                             [4]
57        Suppose that the following processes arrive for execution at the times indicated. Each process will run the listed amount of time. Assume non-preemptive scheduling.
           
            Process     Arrival Time     Burst Time
            P1          0.0              8
            P2          0.4              4
            P3          1.0              1

What is the average turnaround time for these processes with the FCFS scheduling algorithm?                                                                                                                     [4]
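The FCFS computation for the table above can be sketched as follows (turnaround time = completion time minus arrival time):

```python
# Processes from the table above: (name, arrival time, burst time).
processes = [("P1", 0.0, 8), ("P2", 0.4, 4), ("P3", 1.0, 1)]

time = 0.0
turnarounds = []
for name, arrival, burst in processes:   # FCFS: run in arrival order
    time = max(time, arrival) + burst    # completion time of this process
    turnarounds.append(time - arrival)   # turnaround = completion - arrival

avg = sum(turnarounds) / len(turnarounds)
print(round(avg, 2))  # (8 + 11.6 + 12) / 3 = 10.53
```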
58        A CPU scheduling algorithm determines an order for the execution of its scheduled processes. Given n processes to be scheduled on one processor, how many possible different schedules are there? Give a formula in terms of n.                                         [4]

59        Does a process incur more execution overhead compared to a thread? Justify your answer.                                                                                                                           [4]
60        State the practical limitations of implementing the non-preemptive SJF algorithm.
61        What is the disadvantage of user-level threads?                                                                   [4]
62        Define Turnaround Time, Response Time and Waiting Time.                                           [6]
63        Consider the following set of processes:
                        Process     Arrival Time     Processing Time
                        A           0                3
                        B           2                6
                        C           4                4
                        D           6                5
                        E           8                2
            Compute the average turnaround time using each of the following scheduling policies:
            i)          First Come First Served (FCFS)
            ii)        Round Robin with time quantum = 4 units
            iii)       Shortest Process/Job Next (SPN or SJN) and
            iv)       Shortest Remaining Time First (SRTF).
Which of the scheduling policies gives the best result? Comment on the implementation aspects of each policy.                                                                                                     [12]
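The two non-preemptive cases, parts (i) and (iii), can be checked with a small sketch; RR and SRTF need a preemptive time-sliced simulation and are left out here:

```python
# Processes from the table above: (name, arrival time, processing time).
procs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]

def avg_turnaround(order_key):
    """Non-preemptive scheduler: pick from the ready set by order_key."""
    pending = list(procs)
    time = 0
    total = 0
    while pending:
        # Processes that have arrived; if none, jump to the earliest arrival.
        ready = [p for p in pending if p[1] <= time] \
                or [min(pending, key=lambda p: p[1])]
        p = min(ready, key=order_key)
        pending.remove(p)
        time = max(time, p[1]) + p[2]      # completion time
        total += time - p[1]               # turnaround time
    return total / len(procs)

fcfs = avg_turnaround(lambda p: p[1])  # pick by arrival time
spn = avg_turnaround(lambda p: p[2])   # pick by shortest processing time
print(fcfs, spn)  # 8.6 7.6
```

FCFS gives an average turnaround of 8.6 and SPN gives 7.6 for this set.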
64        Show and explain an implementation of printer-computer (computer produces an item, keeps in a buffer from where the printer (consumer) is picking it up) problem using semaphore.                                                                                                          [10]
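One standard implementation of this producer-consumer problem uses three semaphores: `empty` counts free buffer slots, `full` counts items waiting to print, and `mutex` protects the buffer itself. A minimal sketch using Python's `threading.Semaphore`:

```python
import threading
import collections

BUFFER_SIZE = 4
buffer = collections.deque()
empty = threading.Semaphore(BUFFER_SIZE)  # free slots in the buffer
full = threading.Semaphore(0)             # items waiting to be printed
mutex = threading.Semaphore(1)            # mutual exclusion on the buffer
printed = []

def computer():           # producer
    for item in range(10):
        empty.acquire()   # wait for a free slot
        mutex.acquire()
        buffer.append(item)
        mutex.release()
        full.release()    # signal: one more item available

def printer():            # consumer
    for _ in range(10):
        full.acquire()    # wait for an item
        mutex.acquire()
        printed.append(buffer.popleft())
        mutex.release()
        empty.release()   # signal: one more free slot

t1 = threading.Thread(target=computer)
t2 = threading.Thread(target=printer)
t1.start(); t2.start()
t1.join(); t2.join()
print(printed)  # [0, 1, 2, ..., 9] -- items printed in production order
```

The producer blocks when the buffer is full, and the printer blocks when it is empty; neither ever touches the buffer without holding `mutex`.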
65        What is context switch? Why is it considered to be an overhead?                          [3]
66        What is the difference between a long-term scheduler and a short-term scheduler?      [3]
67.       Differentiate between synchronous and asynchronous message passing. What is a mailbox?                                                                                                             [4]
68        Describe the Banker's algorithm to detect deadlock in a system. What are the possible recovery strategies once deadlock is detected?                                                           [10]

January-2008 [34]
69        Is mode switching the same as context switching? Give reasons for your answer.          [4]
70        Explain short-term, medium-term and long-term scheduling.                                            [4]
71        Explain how the Test-and-Set Lock instruction provides mutual exclusion using busy waiting.                                                                                                                                            [4]

ans :--
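A software sketch of how TSL-based mutual exclusion works. The real Test-and-Set is a single atomic hardware instruction; here an internal `Lock` merely simulates that hardware atomicity so the read-and-set (and the release) cannot be interleaved:

```python
import threading

class TSL:
    """Simulated Test-and-Set lock; the inner Lock stands in for hardware atomicity."""
    def __init__(self):
        self.flag = False
        self._atomic = threading.Lock()

    def test_and_set(self):
        # Atomically: return the old flag value and set the flag to True.
        with self._atomic:
            old = self.flag
            self.flag = True
            return old

    def release(self):
        # On real hardware this is just a store; here it goes through the
        # same guard so the simulation stays atomic.
        with self._atomic:
            self.flag = False

lock = TSL()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        while lock.test_and_set():  # entry section: spin (busy-wait)
            pass                    # until test_and_set returns False
        counter += 1                # critical section
        lock.release()              # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000 -- mutual exclusion preserved the shared counter
```

A thread enters the critical section only when `test_and_set` returns False, i.e. when it was the one to flip the flag from False to True; all others spin until the holder clears the flag.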
72        What is the difference between program and process? Explain various states in the life of a process.                                                                                                                  [5]
ans :--
73        Five batch jobs A through E, arrive at a computer center at almost the same time. They have estimated running times of 10, 6, 2, 4 and 8 minutes. Their (externally determined) priorities are 3, 5, 2, 1, and 4 respectively, with 5 being the highest priority. For each of the following scheduling algorithms, determine the mean process turnaround time. Ignore process-switching overhead.
1)         Round robin (Time quantum =2)
2)         Priority scheduling
3)         Shortest job first
For (1), assume that the system is multi-programmed, and that each job gets its fair share of the CPU. For (2) and (3), assume that only one job at a time runs, until it finishes. All jobs are completely CPU bound.                                                             [9]
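Parts (2) and (3) above can be checked with a small sketch: with one job running to completion at a time and all jobs arriving together, mean turnaround is the mean of the cumulative finish times. (Round robin under the fair-share assumption needs a time-sliced simulation and is omitted here.)

```python
# Jobs from the question: (name, running time, priority), 5 = highest.
jobs = [("A", 10, 3), ("B", 6, 5), ("C", 2, 2), ("D", 4, 1), ("E", 8, 4)]

def mean_turnaround(order):
    """Jobs run back-to-back; turnaround of each is its finish time."""
    time = total = 0
    for _name, runtime, _prio in order:
        time += runtime       # this job finishes here
        total += time         # all jobs arrived at time 0
    return total / len(order)

sjf_mean = mean_turnaround(sorted(jobs, key=lambda j: j[1]))
prio_mean = mean_turnaround(sorted(jobs, key=lambda j: j[2], reverse=True))
print(sjf_mean, prio_mean)  # 14.0 20.0
```

Shortest job first (order C, D, B, E, A) gives a mean turnaround of 14 minutes; priority scheduling (order B, E, A, C, D) gives 20 minutes.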
74        Differentiate between deadlock detection, avoidance and prevention? Compare each on efficiency, safety and runtime cost parameters.                                                                 [8]

ans :-- Prevention

  • Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
  • The "hold and wait" conditions may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations); this advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to release all their resources before requesting all the resources they will need. This too is often impractical. (Such algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
  • A "no preemption" (lockout) condition may also be difficult or impossible to avoid as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. (Note: Preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead.) Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.
  • The circular wait condition: Algorithms that avoid circular waits include "disable interrupts during critical sections", and "use a hierarchy to determine a partial ordering of resources" (where no obvious hierarchy exists, even the memory address of resources has been used to determine ordering) and Dijkstra's solution.
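The resource-ordering rule from the last bullet can be sketched as follows: every thread acquires locks in one fixed global order (here, arbitrarily, by object id), so a circular wait can never form even when two threads logically want the locks in opposite orders:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(first, second, done):
    # Impose the global ordering regardless of the order the caller asked for.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            done.append(True)  # critical section using both resources

done = []
# The two threads request the locks in OPPOSITE orders...
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, done))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, done))
t1.start(); t2.start()
t1.join(); t2.join()
# ...but both acquire them in the same global order, so no deadlock occurs.
print(len(done))  # 2 -- both threads finished
```

Without the sorting step, the classic interleaving (t1 holds A and waits for B while t2 holds B and waits for A) would deadlock.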

Avoidance

Deadlock can be avoided if certain information about processes is available in advance of resource allocation. For every resource request, the system checks whether granting the request would move the system into an unsafe state, meaning a state that could result in deadlock. The system then only grants requests that lead to safe states. In order to determine whether the next state will be safe or unsafe, the system must know in advance, at any time, the number and type of all resources in existence, available, and requested. One well-known algorithm used for deadlock avoidance is the Banker's algorithm, which requires each process's maximum resource usage to be known in advance. However, for many systems it is impossible to know in advance what every process will request, so deadlock avoidance is often impossible.
Two other algorithms are Wait/Die and Wound/Wait, each of which uses a symmetry-breaking technique. In both these algorithms there exists an older process (O) and a younger process (Y). Process age can be determined by a timestamp at process creation time. Smaller time stamps are older processes, while larger timestamps represent younger processes.


                                  Wait/Die      Wound/Wait
O needs a resource held by Y      O waits       Y dies
Y needs a resource held by O      Y dies        Y waits
It is important to note that an unsafe state does not necessarily result in deadlock. The notion of safe/unsafe states only refers to whether the system can still enter a deadlock state. For example, if a process requests A, which would result in an unsafe state, but releases B, which would prevent circular wait, then the state is unsafe but the system is not in deadlock.
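The two symmetry-breaking rules above reduce to two small decision functions. Smaller timestamps denote older processes; each function returns what the requesting process must do when the other process holds the resource:

```python
def wait_die(requester_ts, holder_ts):
    # Wait/Die: an older requester waits; a younger requester dies
    # (it is rolled back and restarted later with the same timestamp).
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    # Wound/Wait: an older requester wounds (preempts) the younger holder;
    # a younger requester waits.
    return "wound holder" if requester_ts < holder_ts else "wait"

O, Y = 1, 2  # O is older (smaller timestamp), Y is younger
print(wait_die(O, Y), "|", wait_die(Y, O))      # wait | die
print(wound_wait(O, Y), "|", wound_wait(Y, O))  # wound holder | wait
```

The outputs reproduce the four cells of the table above: in both schemes the older process ultimately wins the resource, which breaks the symmetry that could otherwise produce a circular wait.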

Detection

Often, neither deadlock avoidance nor deadlock prevention can be used. Instead, deadlock detection and process restart are used, employing an algorithm that tracks resource allocation and process states and rolls back and restarts one or more of the processes to remove the deadlock. Detecting a deadlock that has already occurred is straightforward, since the resources that each process has locked and/or currently requested are known to the resource scheduler or OS.
Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in specific environments, using specific means of locking resources, deadlock detection may be decidable. In the general case, it is not possible to distinguish between algorithms that are merely waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because of deadlock.
Deadlock detection techniques include, but are not limited to, model checking. This approach constructs a finite state model on which it performs a progress analysis and finds all possible terminal sets in the model. Each of these then represents a deadlock.


