IB DP Computer Science Study Notes

6.2.2 Techniques in OS Resource Management

The Operating System (OS) is pivotal in managing a computer's resources, ensuring they are used efficiently by different programs. This involves a series of complex techniques to optimise the computer's performance.

Scheduling

Scheduling refers to how the OS prioritises and allocates CPU time to processes and tasks. Efficient scheduling is crucial for multitasking environments to ensure that all processes receive adequate CPU time.

Types of Scheduling Algorithms:

  • First-Come, First-Served (FCFS): This simple scheduling method queues processes in the order they arrive. It is fair but can lead to inefficient CPU usage when short processes are stuck waiting behind a long one, known as the convoy effect.
  • Shortest Job First (SJF): Processes with the smallest execution time are prioritised, reducing the average waiting time for all processes. It requires knowing or estimating each job's execution time in advance, and long jobs may starve.
  • Round-Robin (RR): Each process is assigned a time slice and is cycled through in a round-robin fashion. It is particularly effective in time-sharing systems.
  • Priority-Based Scheduling: Each process is given a priority. Processes with higher priorities are executed first, while those with lower priorities may suffer from starvation if not managed properly.
  • Multilevel Queue Scheduling: Divides the ready queue into several smaller queues, each with its own scheduling algorithm. This caters to processes with various characteristics, such as foreground or background processes.
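The effect of the scheduling order can be made concrete with a small calculation. The sketch below compares average waiting time under FCFS and SJF for a batch of processes that all arrive at time 0; the burst times are illustrative values, not from the notes.

```python
def avg_waiting_time(burst_times):
    """Average time each process waits before starting (all arrive at t=0)."""
    waiting, elapsed = 0, 0
    for burst in burst_times:
        waiting += elapsed       # this process waited for every earlier burst
        elapsed += burst
    return waiting / len(burst_times)

bursts = [24, 3, 3]                      # a long job happens to arrive first
fcfs = avg_waiting_time(bursts)          # convoy effect: 24 blocks the rest
sjf = avg_waiting_time(sorted(bursts))   # shortest jobs run first

print(fcfs)  # 17.0 -> (0 + 24 + 27) / 3
print(sjf)   # 3.0  -> (0 + 3 + 6) / 3
```

Simply reordering the same workload cuts the average wait from 17 to 3 time units, which is why SJF is optimal for average waiting time when burst lengths are known.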

Policies

Policies are predefined strategies that govern the behaviour of the OS. These policies must be designed to be fair, efficient, and secure.

Examples of Resource Management Policies:

  • Memory Management Policies: Includes deciding which block of memory gets allocated to a process and when to swap processes in and out of the memory.
  • I/O Management Policies: Determines how input/output operations should be carried out, such as disk scheduling algorithms like FIFO and SCAN.
  • Security Policies: Define the framework for user authentication, access control, and audit. For example, Mandatory Access Control (MAC) and Discretionary Access Control (DAC).
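As an example of an I/O management policy, the SCAN (elevator) disk scheduling algorithm mentioned above can be sketched as follows: the head sweeps in one direction servicing requests, then reverses. The request queue and starting head position are illustrative values.

```python
def scan_order(requests, head):
    """Return the order in which SCAN services requests: sweep up from the
    current head position, then sweep back down through the rest."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

# Example: head at cylinder 53, pending requests on various cylinders.
order = scan_order([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order)  # [65, 67, 98, 122, 124, 183, 37, 14]
```

Compared with FIFO, SCAN avoids long back-and-forth head movement by grouping requests into a single sweep in each direction.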

Multitasking

Multitasking is the capability to run multiple tasks or processes at the same time. On a single CPU this is apparent simultaneity: the OS switches between processes so rapidly that each appears to run continuously.

Approaches to Multitasking:

  • Cooperative Multitasking: Each process keeps the CPU until it voluntarily yields control. This can lead to monopolisation of CPU time if a process does not behave as expected.
  • Preemptive Multitasking: The OS preempts tasks to manage CPU time, which is more complex but ensures that no single process can hog resources.
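Cooperative multitasking can be illustrated with Python generators: each "task" runs until it voluntarily yields, mirroring how cooperative systems depend on processes giving up the CPU. This is a minimal sketch, not how a real OS is implemented.

```python
log = []  # records the interleaved execution order

def task(name, steps):
    """A toy task that does one unit of work, then voluntarily yields."""
    for i in range(steps):
        log.append(f"{name}{i}")
        yield                       # voluntarily give up the CPU

def run(tasks):
    """A tiny cooperative scheduler: cycle through tasks until all finish."""
    queue = list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            next(current)           # resume the task until its next yield
            queue.append(current)   # still has work; requeue it
        except StopIteration:
            pass                    # task finished; drop it

run([task("A", 2), task("B", 3)])
print(log)  # ['A0', 'B0', 'A1', 'B1', 'B2']
```

Note that if a task never reached a `yield`, the scheduler would hang, which is exactly the monopolisation risk of cooperative multitasking; preemptive systems avoid this by interrupting tasks with a timer.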

Virtual Memory

Virtual memory allows the OS to use hard disk space to simulate additional RAM, which lets processes run even when they require more memory than is physically available on the system.

Components of Virtual Memory:

  • Page File: A section of the hard drive allocated by the OS to act as virtual RAM.
  • Paging: The process of swapping data between physical RAM and the page file.
  • Translation Lookaside Buffer (TLB): A cache of recently used page-table entries that speeds up virtual-to-physical address translation by avoiding a full page-table lookup.
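The core mechanism behind these components is address translation: a virtual address is split into a page number and an offset, and the page table maps the page to a physical frame. The sketch below uses a toy page table and an assumed 4 KiB page size for illustration.

```python
PAGE_SIZE = 4096  # 4 KiB pages (a common size, assumed for illustration)

# Toy page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    """Translate a virtual address to a physical one via the page table."""
    page = virtual_addr // PAGE_SIZE     # which page the address falls in
    offset = virtual_addr % PAGE_SIZE    # position within that page
    if page not in page_table:
        raise KeyError(f"page fault: page {page} is not in RAM")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```

A lookup that misses the table models a page fault: the OS would then load the page from the page file and retry the access.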

Paging

Paging is a method of storing and retrieving data from secondary storage for use in main memory, allowing the OS to use data not currently in physical RAM.

Working with Paging:

  • Page Replacement Algorithms: Determine which memory pages to swap out, such as LRU or FIFO.
  • Thrashing: A condition in which the OS spends more time paging than executing processes. Proper management is essential to prevent this.
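An LRU page replacement policy like the one mentioned above can be sketched with an ordered mapping: on a hit the page is marked most recently used, and on a fault with full frames the least recently used page is evicted. The reference string is an illustrative example.

```python
from collections import OrderedDict

def count_faults_lru(reference_string, frames):
    """Count page faults under LRU replacement with `frames` physical slots."""
    memory = OrderedDict()   # insertion order tracks recency of use
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)       # mark as most recently used
        else:
            faults += 1                    # page fault: page not in RAM
            if len(memory) == frames:
                memory.popitem(last=False) # evict the least recently used
            memory[page] = True
    return faults

print(count_faults_lru([1, 2, 3, 1, 4, 5], frames=3))  # 5
```

Counting faults for different frame counts is also a simple way to see the onset of thrashing: as frames shrink relative to the working set, the fault count climbs sharply.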

Interrupts

An interrupt is a signal from a device or from software to the processor indicating an event that needs immediate attention.

Handling Interrupts:

  • Interrupt Handlers: A special type of function in the OS that handles specific types of interrupts.
  • Nested Interrupts: A feature that allows an interrupt to be interrupted by another one, which is more urgent.
  • Maskable/Non-Maskable Interrupts: Some interrupts can be masked or ignored, while others cannot, to ensure critical issues are addressed.
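The distinction between maskable and non-maskable interrupts can be sketched as a dispatch table with a mask set. All names here (TIMER, NMI) and handler behaviours are hypothetical, chosen purely for illustration.

```python
handlers = {}     # interrupt name -> handler routine
masked = set()    # interrupts currently masked (deferred)

def register(irq, handler):
    handlers[irq] = handler

def mask(irq):
    masked.add(irq)                       # maskable interrupts can be deferred

def raise_interrupt(irq):
    """Dispatch to the handler unless the interrupt is currently masked.
    A non-maskable interrupt (NMI) is always serviced."""
    if irq in masked and irq != "NMI":
        return "deferred"
    return handlers[irq]()

register("TIMER", lambda: "tick handled")
register("NMI", lambda: "power failure handled")

mask("TIMER")
mask("NMI")                               # masking has no effect on an NMI
print(raise_interrupt("TIMER"))           # deferred
print(raise_interrupt("NMI"))             # power failure handled
```

This mirrors why critical events such as imminent power loss are wired as non-maskable: no software state can prevent them from being handled.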

Polling

Polling is a control scheme where the OS queries each device in turn to determine if it requires attention.

Characteristics of Polling:

  • Resource Intensiveness: Polling can consume considerable system resources, as the CPU must constantly check the status of devices.
  • Latency: There is typically a delay, or latency, between the event occurrence and the OS's response due to the polling interval.
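Both characteristics fall out of the structure of a polling loop, sketched below with a stand-in `Device` class that mimics a hardware status register: every check consumes CPU work whether or not the device is ready, and the polling interval bounds how quickly an event can be noticed.

```python
import time

class Device:
    """A stand-in for real hardware: becomes ready on the Nth status check."""
    def __init__(self, ready_at_poll):
        self.ready_at_poll = ready_at_poll
        self.polls = 0

    def status(self):
        self.polls += 1                    # every check costs CPU time
        return self.polls >= self.ready_at_poll

def poll_until_ready(device, interval=0.0):
    """Busy-wait on the device, sleeping `interval` between checks."""
    while not device.status():
        time.sleep(interval)               # the polling interval adds latency
    return device.polls

dev = Device(ready_at_poll=100)
print(poll_until_ready(dev))  # 100 checks to catch a single event
```

An interrupt-driven design would replace this loop entirely: the CPU does other work and the device signals when it needs attention, which is the trade-off discussed in the FAQ below.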

By comprehending these resource management techniques, students can appreciate how an OS functions as a mediator between hardware and software, managing resources in a way that maximises the functionality and efficiency of a computer system. The techniques outline the intelligence and adaptability built into modern operating systems, allowing them to support complex computing environments and meet the demands of various applications.

FAQ

How do interrupts and polling differ in their use of CPU resources?

Interrupts and polling are two mechanisms for handling I/O operations, and they differ significantly in terms of CPU resource usage. With interrupts, the CPU is free to perform other tasks and only stops to address an interrupt when a device signals that it needs attention. This "interrupt-driven" approach is more efficient because it reduces CPU idle time waiting for I/O operations. Polling, in contrast, requires the CPU to actively check the status of each I/O device regularly, which can waste valuable CPU cycles if the device does not need attention at the time of polling. Hence, interrupts are generally preferred for efficiency but may not be suitable for all devices, especially those that require constant attention.

What role do device drivers play in polling?

Device drivers are specialised software programs that provide an interface between the operating system and hardware devices. In the context of polling, the device driver regularly checks the status of its device to see if it needs processing. The driver is optimised for this task, knowing exactly when and how to check the device, and what actions to take if an operation is required. This can improve system efficiency by offloading the task of monitoring hardware from the CPU to the driver, which is specifically designed for this purpose. Additionally, well-designed drivers can reduce polling frequency or implement intelligent polling strategies that minimise the performance impact on the system.

How do soft and hard real-time operating systems differ in their resource management?

Soft and hard real-time operating systems differ in their guarantees of service. A soft real-time system prioritises processing tasks based on their deadlines, but allows for occasional deadline misses, which may not significantly affect the system's performance. This is often suitable for applications where performance is critical but not mission-critical, such as streaming media. In contrast, a hard real-time system has strict timing constraints and guarantees that tasks will be performed within a specified deadline. Resource management in such systems is more stringent, often requiring dedicated hardware and predictable scheduling algorithms to ensure that deadlines are always met. This is crucial in systems where failure to process data in real-time could result in catastrophic outcomes, such as in life support systems or air traffic control.

What is thrashing, and how does the operating system mitigate it?

Thrashing occurs when a system spends more time swapping pages in and out of the virtual memory than executing processes. This can happen when there is insufficient physical memory and too many processes are active, causing continuous page faults that saturate the I/O bandwidth. The operating system mitigates thrashing by implementing better page replacement algorithms, such as Least Recently Used (LRU), which can predict which pages will be used least in the future. It can also use a working set model to ensure that each process maintains a minimum number of pages in memory to continue operating efficiently. Furthermore, the OS may limit the number of active processes to prevent the system's demand for memory from exceeding the available amount.

How are priorities assigned and used in a priority-based scheduling system?

In a priority-based scheduling system, each process is assigned a priority level by the operating system. This level can be determined based on the type of process (system vs user), the amount of resources required, the expected execution time, or the process importance. Often, system processes have higher priority than user-initiated tasks. Priorities can be static or dynamic; static priorities are set at the beginning and remain constant, whereas dynamic priorities can change during the process lifetime, usually in response to the aging mechanism which prevents starvation of low priority processes. The scheduler then uses these priority levels to decide the order in which processes are given CPU time, with higher priority processes being chosen first.
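The priority-plus-aging scheme described above can be sketched with a heap-based ready queue. The process names and the convention that a lower number means higher priority are assumptions for illustration.

```python
import heapq

# Toy ready queue: (priority, name); lower number = higher priority.
ready = [(3, "user_job"), (1, "system_daemon"), (5, "batch_report")]
heapq.heapify(ready)

def age(queue, boost=1):
    """Aging: improve (lower) every waiting process's priority by `boost`,
    so low-priority work eventually rises and cannot starve."""
    aged = [(max(0, prio - boost), name) for prio, name in queue]
    heapq.heapify(aged)
    return aged

order = []
while ready:
    prio, name = heapq.heappop(ready)  # dispatch the highest-priority process
    order.append(name)
    ready = age(ready)                 # everyone still waiting gains priority

print(order)  # ['system_daemon', 'user_job', 'batch_report']
```

In a real scheduler the aging step would run on a timer rather than after every dispatch, but the effect is the same: waiting time gradually converts into priority.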

Practice Questions

Describe how the operating system utilises both preemptive and cooperative multitasking to manage processes. Provide an advantage and a disadvantage for each type.

The operating system employs preemptive multitasking to allocate CPU time among processes by forcibly preempting the CPU from one process and giving it to another. This ensures a responsive system that can handle high-priority tasks quickly but can lead to the overhead of context switching. Cooperative multitasking, on the other hand, relies on processes to relinquish control of the CPU voluntarily, which minimises context switching overhead but can result in unresponsive systems if a process does not yield the CPU.

Explain the concept of paging in virtual memory management and discuss one advantage and one disadvantage of using a paging system.

Paging in virtual memory management involves dividing the computer's memory into fixed-size blocks and managing memory access at the level of these blocks, called pages. An advantage of paging is that it eliminates external fragmentation, as memory is allocated in fixed-size chunks. This makes memory allocation more efficient and simplifies the memory management process. However, a disadvantage is that it can lead to internal fragmentation, where the allocated memory may slightly exceed the program’s actual needs, resulting in wasted space within each page.
