Giresun University Operating Systems Final Exam (2022-2023)

Detailed answers for Computer Science students

1. How does an operating system enable multiple processes to run in parallel on a single-processor hardware? Explain briefly.

Answer: The operating system achieves this through time-sharing and multitasking. It allocates the CPU to different processes in short time slices (each called a time quantum) using a scheduling algorithm such as Round-robin. The OS performs context switching to save the state of one process (registers, program counter, etc.) and load another, creating the illusion of parallelism. This is known as preemptive multitasking. For example, a process runs for a few milliseconds before the OS switches to another, ensuring responsiveness and efficient CPU utilization. This approach balances foreground tasks (e.g., user interfaces) with background tasks (e.g., system services).

Key Concepts: Time-sharing, context switching, preemptive multitasking, scheduling algorithms.
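
To make the context-switch mechanics concrete, here is a minimal Python sketch (illustrative only, not how a kernel is written): generators stand in for process contexts, and suspending/resuming a generator plays the role of saving and restoring CPU state.

```python
from collections import deque

def process(name, work_units):
    """A fake process: each yield is a point where the OS may preempt it."""
    for step in range(1, work_units + 1):
        yield f"{name}: step {step}/{work_units}"

def run_round_robin(procs, quantum=2):
    """Give each process up to `quantum` steps, then switch to the next."""
    ready = deque(procs)
    while ready:
        current = ready.popleft()          # "dispatch" the next process
        for _ in range(quantum):           # let it run for one time slice
            try:
                print(next(current))       # one unit of work
            except StopIteration:
                break                      # process finished; drop it
        else:
            ready.append(current)          # quantum expired: context switch

run_round_robin([process("A", 3), process("B", 5), process("C", 2)])
```

The interleaved output from A, B, and C on a single interpreter thread is exactly the "illusion of parallelism" described above.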

2. How does an operating system allow multiple processes to use more memory than physically available? Explain briefly.

Answer: The operating system uses virtual memory to provide each process with a separate, abstract address space. Virtual addresses are mapped to physical memory via page tables, managed by the Memory Management Unit (MMU). When physical memory is insufficient, the OS employs paging, moving less-used pages to a disk-based swap space or page file. This process, called page swapping, allows processes to operate as if they have access to more memory than physically available. However, excessive swapping can lead to thrashing, where the system spends more time swapping pages than executing processes, degrading performance due to slower disk I/O compared to RAM.

Key Concepts: Virtual memory, paging, swap space, page tables, thrashing.
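
A toy Python model of translation plus swapping (the two-frame memory, FIFO eviction policy, and page contents are all invented for illustration): three virtual pages share two physical frames, with faults serviced from "swap".

```python
from collections import deque

NUM_FRAMES = 2                              # physical memory: only 2 frames
swap = {0: "code", 1: "heap", 2: "stack"}   # every page lives on "disk"
page_table = {}                             # virtual page -> physical frame
frames = [None] * NUM_FRAMES
fifo = deque()                              # eviction order

def access(page):
    if page in page_table:                  # hit: page already resident
        return frames[page_table[page]]
    # page fault: find a frame, evicting the oldest resident page if needed
    if len(fifo) < NUM_FRAMES:
        frame = len(fifo)
    else:
        victim = fifo.popleft()
        frame = page_table.pop(victim)      # "write the victim back to swap"
        print(f"  fault: evicted page {victim} from frame {frame}")
    frames[frame] = swap[page]              # "read the page in from swap"
    page_table[page] = frame
    fifo.append(page)
    return frames[frame]

for p in [0, 1, 0, 2, 1]:                   # 3 virtual pages, 2 frames
    print(f"access page {p} -> {access(p)}")
```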

3. Which scheduling algorithm prioritizes processes based on their expected execution time?

Answer: b) Shortest Job Next

Explanation: The Shortest Job Next (SJN) algorithm, also known as Shortest Job First (SJF), selects the process with the shortest estimated execution time to run next. This minimizes the average waiting time, since short jobs complete quickly and drain the queue. However, SJN requires accurate estimates of execution times, which can be difficult to obtain in dynamic systems. In contrast, Round-robin allocates equal time slices to all processes, First-Come, First-Served (FCFS) executes processes in arrival order, and Priority scheduling uses predefined priorities, which need not be based on execution time.

Key Concepts: Shortest Job Next, scheduling, average waiting time, execution time estimation.
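
A quick worked comparison in Python (the burst times are invented) shows why running the shortest job first lowers the average wait:

```python
# Jobs as (name, burst_time). SJN runs the shortest burst first, which
# minimizes average waiting time for a fixed batch of jobs.
jobs = [("A", 8), ("B", 2), ("C", 4)]

def avg_wait(order):
    elapsed, total_wait = 0, 0
    for _, burst in order:
        total_wait += elapsed   # each job waits for everything scheduled before it
        elapsed += burst
    return total_wait / len(order)

print("FCFS order:", avg_wait(jobs))                               # (0+8+10)/3 = 6.0
print("SJN order: ", avg_wait(sorted(jobs, key=lambda j: j[1])))   # (0+2+6)/3 ~= 2.67
```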

4. Which of the following is an example of a block I/O device?

Answer: c) Hard Disk

Explanation: Block I/O devices handle data in fixed-size blocks, such as sectors on a hard disk or SSD. Hard disks are classic examples, as they read/write data in blocks (e.g., 512 bytes or 4 KB), making them suitable for file systems and storage management. In contrast, keyboards and mice are character stream devices, producing data as a stream of characters or events, and printers are likewise treated as character devices, receiving data as a sequential stream. Understanding the difference between block and character devices is crucial for OS device management.

Key Concepts: Block I/O, character stream devices, storage devices, device management.

5. Which of the following is a benefit of using memory-mapped files in an operating system?

Answer: a) Reduced disk I/O

Explanation: Memory-mapped files allow a file to be mapped directly into a process’s virtual address space, enabling the OS to treat file data as if it were in memory. This reduces the need for explicit disk I/O operations (e.g., read/write system calls), as the OS can load file data into memory on demand via paging. This improves performance, especially for large files. Reduced CPU usage is not a primary benefit, as CPU usage depends on the workload. Increased memory usage is a potential downside, not a benefit. Process isolation is unrelated to memory-mapped files.

Key Concepts: Memory-mapped files, virtual memory, disk I/O, paging.
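
A small Python sketch using the standard mmap module (the file name demo.bin is arbitrary): the file is read and modified through memory, with no explicit read/write calls once the mapping is set up.

```python
import mmap
import os

path = "demo.bin"                          # scratch file for the demo
with open(path, "wb") as f:
    f.write(b"hello, mapped world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file
        print(mm[:5])                      # read file bytes like memory: b'hello'
        mm[0:5] = b"HELLO"                 # in-place update; the OS pages data
                                           # in on demand and flushes dirty pages
os.remove(path)
```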

6. Which of the following is a responsibility of memory management in an operating system?

Answer: d) All of the above

Explanation: Memory management in an OS is responsible for multiple tasks: allocating and deallocating physical memory to processes, ensuring efficient use of resources; monitoring physical memory usage to track which memory is in use or free; and protecting physical memory by using mechanisms like address space isolation and page table permissions to prevent unauthorized access. All these functions are critical for secure and efficient memory usage, making all of the above the correct choice.

Key Concepts: Memory management, allocation, monitoring, protection, address space isolation.

7. Which of the following is not a primary function of input/output (I/O) in an operating system?

Answer: c) Process scheduling and synchronization

Explanation: The primary functions of I/O in an OS include data transfer between devices and memory (e.g., reading from a disk), device management (e.g., configuring and controlling devices), and interrupt handling (e.g., responding to device events) along with error detection. Process scheduling and synchronization, however, are functions of the OS’s process management subsystem, not I/O. Scheduling determines which process runs next, and synchronization manages process coordination, which are separate from I/O operations.

Key Concepts: I/O management, device management, interrupt handling, process scheduling.

8. Which of the following is a disadvantage of using demand paging in an operating system?

Answer: b) Increased disk I/O

Explanation: Demand paging loads pages into memory only when they are needed, reducing memory usage. However, a key disadvantage is increased disk I/O, as pages must be fetched from disk whenever a page fault occurs. This can slow down performance, especially if page faults are frequent. Increased memory usage is not a disadvantage, as demand paging optimizes memory. Increased CPU usage may occur indirectly but is not the primary issue. Fragmentation concerns memory allocation patterns and is not a direct consequence of demand paging.

Key Concepts: Demand paging, page fault, disk I/O, virtual memory.

9. Which of the following is not a benefit of using virtual memory in an operating system?

Answer: b) Protection of system resources

Explanation: Virtual memory provides several benefits: efficient memory usage by allowing paging and swapping, simplified memory allocation through abstract address spaces, and the ability to run larger applications by using disk as extended memory. However, protection of system resources is not a direct benefit of virtual memory. While virtual memory supports process isolation (protecting processes from each other), resource protection is a broader OS security function, not specific to virtual memory.

Key Concepts: Virtual memory, paging, process isolation, memory allocation.

10. Which of the following is a benefit of multithreading in operating systems?

Answer: a) Increased parallelism and improved performance

Explanation: Multithreading allows multiple threads within a process to execute concurrently, sharing the same memory and resources. This enables increased parallelism (e.g., on multi-core CPUs) and improved performance by utilizing CPU cycles more efficiently. Reduced resource consumption is partially true, since threads share memory, but it is not the primary benefit. Simplified security management is unrelated, as multithreading can actually complicate security. Eliminating scheduling is incorrect, as threads still require scheduling.

Key Concepts: Multithreading, parallelism, performance, thread scheduling.

11. Which of the following is a disadvantage of Direct Memory Access (DMA)?

Answer: c) Higher latency

Explanation: Direct Memory Access (DMA) allows devices to transfer data directly to/from memory without CPU involvement, reducing CPU usage and enabling high I/O throughput. However, a disadvantage is higher latency in some cases, due to the setup time for DMA transfers and synchronization overhead. Low CPU usage and high I/O throughput are benefits, not disadvantages. None of the above is incorrect, as higher latency is a valid concern.

Key Concepts: DMA, I/O throughput, latency, CPU offloading.

12. Which of the following is not an advantage of asynchronous I/O?

Answer: c) Simplified programming

Explanation: Asynchronous I/O allows a process to continue executing while I/O operations are performed in the background, leading to increased efficiency and potentially reduced latency for the process. However, it does not simplify programming; asynchronous I/O often requires complex mechanisms like callbacks, promises, or event loops, making programming more challenging compared to synchronous I/O. None of the above is incorrect, as simplified programming is not an advantage.

Key Concepts: Asynchronous I/O, efficiency, latency, programming complexity.
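
A minimal Python asyncio sketch illustrating both sides of the trade-off: the two simulated I/O waits overlap (efficiency), but the code has to be restructured around await points (complexity).

```python
import asyncio

async def fetch(name, delay):
    print(f"{name}: I/O started")
    await asyncio.sleep(delay)      # stands in for a disk or network wait
    print(f"{name}: I/O finished")

async def main():
    # Both waits overlap, so the total is ~2s rather than ~3s.
    await asyncio.gather(fetch("read A", 2), fetch("read B", 1))

asyncio.run(main())
```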

13. Which of the following correctly defines a process in the context of operating systems?

Answer: a) A program in execution

Explanation: A process is a program in execution, encompassing the program code, data, stack, and execution state (e.g., registers, program counter). It is the active entity managed by the OS. A file on disk is static data, not a process. An I/O device is hardware, and a network protocol is a communication standard, none of which define a process.

Key Concepts: Process, program execution, process state, OS management.

14. Which of the following is a method used for deadlock detection?

Answer: a) Resource Allocation Graph

Explanation: A Resource Allocation Graph (RAG) is a graphical representation used to detect deadlocks by identifying cycles in resource requests and allocations. A cycle in the graph indicates a potential deadlock. Round-Robin Scheduling, LRU, and Shortest Job Next are scheduling or replacement algorithms, not related to deadlock detection.

Key Concepts: Deadlock detection, Resource Allocation Graph, cycles, resource management.
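
A small Python sketch of cycle detection over a hand-built request/assignment graph (the processes, resources, and edges are invented for illustration); with single-instance resources, a cycle means deadlock.

```python
# P -> R edges are requests; R -> P edges are assignments.
graph = {
    "P1": ["R1"],   # P1 requests R1
    "R1": ["P2"],   # R1 is held by P2
    "P2": ["R2"],   # P2 requests R2
    "R2": ["P1"],   # R2 is held by P1 -> cycle P1 -> R1 -> P2 -> R2 -> P1
}

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / in progress / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:              # back edge: cycle
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

print("deadlock detected:", has_cycle(graph))   # True
```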

15. Which of the following is a synchronous I/O operation?

Answer: a) Polling

Explanation: Polling is a synchronous I/O operation where the CPU repeatedly checks a device’s status to determine if an I/O operation is complete, blocking other tasks. Interrupts and DMA are asynchronous, as they allow the CPU to perform other tasks while waiting for I/O completion, signaled by an interrupt or managed by a DMA controller. None of the above is incorrect, as polling is synchronous.

Key Concepts: Synchronous I/O, polling, interrupts, DMA.

16. Which of the following is not a common file system access control mechanism?

Answer: c) Password protection

Explanation: Common file system access control mechanisms include user-based permissions (e.g., owner/group/other in Unix), role-based access control (RBAC, assigning permissions to roles), and access control lists (ACLs, specifying permissions for specific users). Password protection is used for securing files or systems but is not a standard file system access control mechanism, as it typically applies to authentication, not file-level permissions.

Key Concepts: File system access control, user-based permissions, RBAC, ACLs.

17. Which of the following is not a common file system error correction mechanism?

Answer: d) Randomization

Explanation: Journaling records file system changes to ensure consistency after crashes, checksums detect data corruption, and backups restore data after errors. These are common error correction mechanisms. Randomization is not related to error correction; it may refer to techniques like address space layout randomization (ASLR) for security, not file system reliability.

Key Concepts: File system reliability, journaling, checksum, backups.

18. Which technique is used by operating systems to reclaim memory no longer needed by a process?

Answer: c) Garbage collection

Explanation: Garbage collection is a technique used to reclaim memory that a process no longer needs, automatically freeing unused memory objects (common in managed languages like Java). Paging manages how memory is divided and mapped, swapping moves pages to disk, and fragmentation is a problem in which memory becomes inefficiently allocated; none of these directly reclaim unused memory.

Key Concepts: Garbage collection, memory reclamation, memory management.

19. What is a page fault in the context of memory management?

Answer: c) An error when an application accesses memory not loaded in physical memory

Explanation: A page fault occurs when a process attempts to access a virtual memory page that is not currently in physical memory (e.g., it’s in swap space or not yet loaded). The OS handles the fault by loading the required page, possibly evicting another. Insufficient memory may cause thrashing, not a page fault. Memory allocation is a separate process. None of the above is incorrect.

Key Concepts: Page fault, virtual memory, paging, swap space.

20. What is a Translation Lookaside Buffer (TLB) in the context of memory management?

Answer: d) A hardware component that caches frequently used virtual-to-physical address translations

Explanation: The Translation Lookaside Buffer (TLB) is a hardware cache in the CPU that stores recent virtual-to-physical address translations, speeding up memory access by reducing page table lookups. It is not a memory protection mechanism, an OS-maintained data structure, or an error condition. TLB misses require slower page table walks, which hurts performance.

Key Concepts: TLB, address translation, virtual memory, performance.

21. What is thrashing in the context of memory management?

Answer: b) Frequent page faults

Explanation: Thrashing occurs when the OS spends excessive time swapping pages between physical memory and disk due to frequent page faults, often because too many processes are competing for limited memory. This degrades performance, as the CPU is underutilized. Excessive CPU time, memory leaks, and exceeding quotas are distinct issues unrelated to thrashing.

Key Concepts: Thrashing, page faults, virtual memory, performance degradation.

22. What is the purpose of directory entries in a file system?

Answer: b) To store metadata about a file

Explanation: Directory entries store metadata about files, such as file name, size, creation date, and pointers to the file’s data blocks. While they may include information about physical location (via inode pointers) and permissions, their primary purpose is to hold metadata. Actual file data is stored in separate data blocks on the disk.

Key Concepts: Directory entries, file metadata, file system structure.

23. What is the primary purpose of memory management in an operating system?

Answer: a) To ensure efficient and reliable execution of applications

Explanation: The primary purpose of memory management is to allocate, track, and protect memory to ensure applications run efficiently and reliably. This includes virtual memory, paging, and process isolation. Network management, power control, and encryption are handled by other OS components, not memory management.

Key Concepts: Memory management, efficient execution, process isolation.

24. What is the difference between a process and a thread?

Answer: b) A process is a collection of related threads, while a thread is a single execution unit

Explanation: A process is a program in execution, containing one or more threads, which are the smallest units of execution. Threads within a process share the same memory and resources but have separate stacks and registers. Option a reverses the definitions. Option c is incorrect, as processes and threads are distinct. Option d is wrong, as both are used in multitasking systems.

Key Concepts: Process, thread, multithreading, resource sharing.
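
A short Python sketch of the sharing in practice (the counter and iteration count are arbitrary): four threads in one process update the same variable, which separate processes could not do without explicit inter-process communication.

```python
import threading

counter = 0               # shared by every thread in this process
lock = threading.Lock()   # shared memory means updates must be coordinated

def worker():
    global counter
    for _ in range(100_000):
        with lock:        # each thread has its own stack, but counter is shared
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)            # 400000: all threads saw the same variable
```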

25. What is the primary advantage of using a journaling file system?

Answer: c) Faster file system recovery after crashes

Explanation: A journaling file system logs changes before applying them, enabling faster recovery after crashes by replaying or undoing logged operations to restore consistency. Faster access times and compression are not primary benefits, as journaling may slightly increase overhead. File sharing is unrelated to journaling.

Key Concepts: Journaling, file system recovery, consistency.

26. Which scheduling algorithm aims to give each process an equal share of CPU time?

Answer: a) Round-robin

Explanation: Round-robin scheduling assigns each process a fixed time slice (quantum) in a cyclic order, ensuring equal CPU time sharing. Shortest Job Next prioritizes short tasks, FCFS follows arrival order, and Priority scheduling favors high-priority processes, none of which guarantee equal shares.

Key Concepts: Round-robin, time quantum, fair scheduling.

27. Which I/O technique allows a process to perform other tasks while waiting for I/O completion?

Answer: b) Interrupt-driven I/O

Explanation: Interrupt-driven I/O allows the CPU to perform other tasks while an I/O operation is in progress, with the device signaling completion via an interrupt. Polling ties up the CPU checking device status. DMA offloads I/O to a controller but is a separate mechanism. Programmed I/O requires CPU involvement, blocking other tasks.

Key Concepts: Interrupt-driven I/O, asynchronous I/O, CPU utilization.

28. Which I/O technique allows data transfer between devices and memory without CPU involvement?

Answer: c) Direct Memory Access (DMA)

Explanation: DMA enables direct data transfer between devices and memory via a DMA controller, bypassing the CPU to improve efficiency for large data transfers. Polling, Interrupt-driven I/O, and Programmed I/O all involve the CPU in data transfer.

Key Concepts: DMA, CPU offloading, I/O efficiency.

29. Which page replacement algorithm aims to minimize page faults by selecting the page unused for the longest time?

Answer: a) Least Recently Used (LRU)

Explanation: LRU replaces the page that has not been accessed for the longest time, on the assumption that it is least likely to be needed soon. This minimizes page faults in practice but requires tracking page access history. FIFO replaces the oldest page, Optimal selects the page whose next use lies furthest in the future, and Clock approximates LRU using a reference bit.

Key Concepts: LRU, page replacement, page faults.
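
A compact LRU simulation in Python, using OrderedDict as the recency list (the reference string is invented):

```python
from collections import OrderedDict

def lru_faults(references, num_frames):
    frames = OrderedDict()                   # insertion order = recency order
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)         # touched: now most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)   # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 1, 4, 5], 3))     # 5: page 2 is evicted before page 1
```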

30. Which page replacement algorithm suffers from Belady’s anomaly, where increasing the number of page frames can increase page faults?

Answer: b) First-In, First-Out (FIFO)

Explanation: Belady's anomaly occurs when adding more page frames counterintuitively increases page faults. FIFO is prone to this because it replaces pages based on arrival order rather than usage patterns, leading to poor decisions on some access sequences. LRU and Optimal are stack algorithms and therefore immune to the anomaly, while Clock, as an approximation of LRU, is far less susceptible.

Key Concepts: Belady’s anomaly, FIFO, page replacement.
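
The anomaly is easy to reproduce in a short Python sketch with the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:

```python
from collections import deque

def fifo_faults(references, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())   # evict the oldest arrival
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print("3 frames:", fifo_faults(refs, 3))   # 9 faults
print("4 frames:", fifo_faults(refs, 4))   # 10 faults: more memory, more faults
```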

31. Which technique allows applications to access a larger address space than physically available in a computer’s memory?

Answer: a) Virtual memory

Explanation: Virtual memory allows applications to use a larger address space by abstracting physical memory and using disk-based swap space. Paging and segmentation are mechanisms within virtual memory, but virtual memory is the overarching technique. Fragmentation is a memory management issue, not a solution.

Key Concepts: Virtual memory, address space, swap space.

32. Which scheduling algorithm assigns priorities to processes based on their characteristics or importance?

Answer: d) Priority scheduling

Explanation: Priority scheduling assigns priorities to processes based on factors like importance, resource needs, or deadlines, executing higher-priority processes first. Round-robin uses equal time slices, Shortest Job Next prioritizes short execution times, and FCFS follows arrival order.

Key Concepts: Priority scheduling, process prioritization.

33. What is the main disadvantage of the First-Come, First-Served (FCFS) scheduling algorithm?

Answer: a) Long waiting times for short processes

Explanation: FCFS executes processes in arrival order, which can lead to the convoy effect: a long-running process delays all subsequent short processes, causing long waiting times. A tendency to favor CPU-bound processes is a side effect of the convoy effect rather than the main disadvantage. Its non-preemptive nature is a limitation but, again, not the main drawback. Complex calculations are not required, as FCFS is simple.

Key Concepts: FCFS, convoy effect, waiting time.
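
A few lines of Python arithmetic (the burst times are invented) make the convoy effect visible:

```python
# One long job at the head of the queue makes every later job wait.
jobs = [("long", 100), ("short1", 1), ("short2", 1)]   # arrival order

elapsed, waits = 0, []
for name, burst in jobs:
    waits.append(elapsed)           # FCFS: wait for all earlier arrivals
    elapsed += burst
print(sum(waits) / len(waits))      # 67.0, vs 1.0 if the short jobs ran first
```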

34. Which strategy can be used to prevent deadlocks?

Answer: b) Banker’s Algorithm

Explanation: The Banker's Algorithm keeps the system out of deadlock by ensuring resource allocation never enters an unsafe state: before granting a request, it checks whether doing so could lead toward deadlock. (Strictly speaking this is classified as deadlock avoidance, but it is the one proactive option here.) A Resource Allocation Graph detects deadlocks, rollback recovers from them, and preemption resolves them; none of these stop deadlocks before they form.

Key Concepts: Banker’s Algorithm, deadlock prevention, resource allocation.
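
A Python sketch of the safety check at the core of the Banker's Algorithm (the resource matrices are invented for illustration): a state is safe if some ordering lets every process acquire its remaining need and finish.

```python
def is_safe(available, allocation, need):
    work = available[:]                  # resources currently free
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can finish, then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)         # safe only if everyone could finish

# 3 processes, 2 resource types (illustrative numbers)
available  = [3, 2]
allocation = [[1, 0], [2, 1], [0, 1]]
need       = [[2, 2], [1, 1], [3, 1]]    # max demand minus current allocation
print("state is safe:", is_safe(available, allocation, need))   # True
```

A request is granted only if the state would remain safe after provisionally subtracting it, which is what makes the algorithm proactive rather than reactive.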

35. Which page does the Optimal page replacement algorithm select for replacement?

Answer: a) The page that will cause the fewest page faults in the future

Explanation: The Optimal page replacement algorithm selects the page that will not be needed for the longest time in the future, minimizing page faults. It is theoretical, since it requires knowledge of future references. Recency of access and priority are not the criteria it uses; "the page not accessed recently" describes LRU, not Optimal.

Key Concepts: Optimal page replacement, page faults, future prediction.

36. The Clock page replacement algorithm maintains a circular list of pages and uses which mechanism to determine which page to replace?

Answer: a) Reference bit

Explanation: The Clock algorithm uses a reference bit to track whether a page has been accessed. Pages are arranged in a circular list, and a pointer moves through them. If a page’s reference bit is 1 (recently used), it’s cleared, and the pointer moves on. If it’s 0, the page is replaced. Dirty bit tracks modifications, page size is irrelevant, and priority is not used.

Key Concepts: Clock algorithm, reference bit, page replacement.
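
A minimal second-chance implementation in Python; the reference string is chosen so that a recently used page survives the sweep.

```python
class Clock:
    def __init__(self, num_frames):
        self.frames = [None] * num_frames   # each entry: [page, ref_bit]
        self.hand = 0

    def access(self, page):
        for entry in self.frames:
            if entry and entry[0] == page:
                entry[1] = 1                # hit: set the reference bit
                return
        while True:                         # fault: sweep for a victim
            entry = self.frames[self.hand]
            if entry is None or entry[1] == 0:
                self.frames[self.hand] = [page, 1]
                self.hand = (self.hand + 1) % len(self.frames)
                return
            entry[1] = 0                    # second chance: clear bit, move on
            self.hand = (self.hand + 1) % len(self.frames)

clock = Clock(3)
for p in [1, 2, 3, 4, 2, 5]:
    clock.access(p)
print([e[0] for e in clock.frames])   # [4, 2, 5]: page 2's set bit spared it
```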

37. Which mechanism is used in virtual memory to translate virtual addresses to physical addresses?

Answer: a) Page tables

Explanation: Page tables map virtual addresses to physical addresses in virtual memory systems; they are maintained by the OS and consulted by the MMU. Segmentation is an alternative memory management technique. Demand paging governs when pages are loaded, not how addresses are translated. The TLB caches translations for speed but is not the primary translation mechanism.

Key Concepts: Page tables, address translation, virtual memory.

38. Which technique is commonly used to manage memory sharing between multiple applications in virtual memory systems?

Answer: d) Copy-on-write

Explanation: Copy-on-write (COW) allows multiple processes to share the same memory pages until one attempts to modify a page, at which point a copy is made. This optimizes memory usage for shared libraries or forked processes. Demand paging and page replacement manage memory allocation, not sharing. Segmentation organizes memory but doesn’t address sharing directly.

Key Concepts: Copy-on-write, memory sharing, virtual memory.
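
A Unix-only Python sketch (it relies on os.fork, so it will not run on Windows): after the fork, parent and child share physical pages copy-on-write, and the child's write stays private to the child.

```python
import os

data = bytearray(b"shared page contents")

pid = os.fork()                       # parent and child now share pages COW
if pid == 0:                          # child
    data[0:6] = b"CHILD!"             # write: kernel copies the page privately
    print("child sees: ", bytes(data))
    os._exit(0)
else:                                 # parent
    os.waitpid(pid, 0)
    print("parent sees:", bytes(data))   # unchanged: b'shared page contents'
```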

39. What is the role of a page replacement algorithm?

Answer: c) To determine which memory pages to move to disk to free physical memory

Explanation: A page replacement algorithm selects which page in physical memory to evict (move to swap space) when a new page must be loaded and memory is full, freeing space for the incoming page. Allocating memory to processes is a separate memory-management task, "freeing memory" in general is broader than replacement, and moving pages from disk into memory (paging in) happens after a victim has been chosen.

Key Concepts: Page replacement, swap space, memory management.

40. Which scheduling algorithm provides the minimum average waiting time for all processes?

Answer: d) Shortest Remaining Time First (SRTF)

Explanation: SRTF is the preemptive version of Shortest Job Next: it always executes the process with the shortest remaining execution time, preempting longer ones when a shorter job arrives. This minimizes average waiting time. Round-robin ensures fairness but not minimal waiting. SJN is non-preemptive and therefore suboptimal when shorter jobs arrive mid-execution. FCFS can cause long waits due to the convoy effect.

Key Concepts: SRTF, average waiting time, preemptive scheduling.
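
A tick-by-tick SRTF simulation in Python (arrival and burst times are invented): the short job C preempts as soon as it arrives and finishes first, pulling the average wait down.

```python
import heapq

# Each job is (name, arrival, burst). Tick by tick, SRTF runs whichever
# ready job has the least remaining time, preempting on shorter arrivals.
jobs = [("A", 0, 7), ("B", 2, 4), ("C", 4, 1)]

def srtf_avg_wait(jobs):
    arrival = {n: a for n, a, _ in jobs}
    burst = {n: b for n, _, b in jobs}
    order = sorted(jobs, key=lambda j: j[1])        # by arrival time
    ready, done, time, i = [], {}, 0, 0
    while len(done) < len(jobs):
        while i < len(order) and order[i][1] <= time:
            name = order[i][0]
            heapq.heappush(ready, (burst[name], name))
            i += 1
        if not ready:                               # CPU idle until next arrival
            time = order[i][1]
            continue
        rem, name = heapq.heappop(ready)
        time += 1                                   # run shortest job one tick
        if rem > 1:
            heapq.heappush(ready, (rem - 1, name))
        else:
            done[name] = time                       # completion time
    waits = [done[n] - arrival[n] - burst[n] for n in burst]
    return sum(waits) / len(waits)

print(srtf_avg_wait(jobs))   # 2.0: C waits 0, B waits 1, A waits 5
```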