Giresun University Operating Systems Midterm Exam (2022-2023)

Detailed answers for Computer Science students

1. Which of the following memory management techniques does not involve dynamic memory allocation?

Options:

Answer: d) Fixed partitioning

Explanation: Fixed partitioning divides physical memory into static, predetermined partitions at system startup, and each partition is allocated to a process without resizing during execution. This lacks dynamic allocation, as memory is pre-allocated. In contrast, paging dynamically allocates fixed-size pages, segmentation allocates variable-sized segments, and virtual memory dynamically manages memory using paging or swapping. Fixed partitioning is rigid, often leading to internal fragmentation.

Key Concepts: Fixed partitioning, dynamic allocation, paging, segmentation, virtual memory.
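The internal-fragmentation point can be sketched numerically; the partition and process sizes below are illustrative values, not from the exam:

```python
# Sketch: internal fragmentation under fixed partitioning.
partitions = [100, 200, 300, 400]   # fixed sizes, set at system startup
processes = [90, 150, 220, 310]     # each process occupies one whole partition

# First-fit placement: each process goes into the first free partition
# large enough to hold it; the leftover space inside that partition is wasted.
free = list(partitions)
waste = 0
for p in processes:
    for i, size in enumerate(free):
        if size is not None and size >= p:
            waste += size - p       # internal fragmentation in this partition
            free[i] = None          # partition now occupied, cannot be resized
            break

print(waste)  # 10 + 50 + 80 + 90 = 230 units lost to internal fragmentation
```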

2. Which of the following scheduling algorithms can result in starvation?

Options:

Answer: b) Priority scheduling

Explanation: Starvation occurs when a process is perpetually denied CPU time because other processes keep being chosen ahead of it. Priority scheduling can starve low-priority processes if high-priority processes keep arriving. Round-robin ensures fair time slices, preventing starvation. First-come, first-served (FCFS) may cause long delays, but every process eventually runs; Shortest job first (SJF) can in principle also starve a long job under a steady stream of short arrivals, but priority scheduling is the canonical answer here.

Key Concepts: Starvation, priority scheduling, fairness, scheduling algorithms.
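A minimal simulation of the starvation scenario, assuming made-up job names and a non-preemptive priority scheduler (lower number = higher priority): as long as new high-priority jobs keep arriving, the low-priority job never gets the CPU.

```python
import heapq

# Sketch: priority scheduling starving a low-priority job.
# (priority, tiebreak, name); all names and counts are illustrative.
ready = [(0, i, f"high-{i}") for i in range(5)]
ready.append((9, 99, "low"))
heapq.heapify(ready)

order = []
while ready:
    prio, _, name = heapq.heappop(ready)
    order.append(name)
    # Each finished high-priority job is replaced by a newly arriving one
    # (until 10 dispatches), so "low" keeps being pushed back: starvation.
    if name.startswith("high") and len(order) < 10:
        heapq.heappush(ready, (0, len(order) + 100, f"high-{len(order) + 4}"))

print(order[-1])  # "low" runs only after the high-priority stream dries up
```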

3. Which of the following scheduling algorithms can lead to the convoy effect?

Options:

Answer: c) First-come, first-served

Explanation: The convoy effect occurs when a long-running process holds the CPU, causing shorter processes to wait, increasing average waiting time. First-come, first-served (FCFS) executes processes in arrival order, so a CPU-intensive process can block others, creating a "convoy." Round-robin avoids this with time slices, Priority scheduling prioritizes based on importance, and Shortest job first favors short processes, reducing the convoy effect.

Key Concepts: Convoy effect, FCFS, scheduling, waiting time.
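The effect can be quantified with the classic textbook burst times (24, 3, 3); the numbers are illustrative:

```python
# Sketch: average waiting time under FCFS vs SJF for one long job
# followed by two short ones.
def avg_wait(bursts):
    elapsed, total = 0, 0
    for b in bursts:
        total += elapsed      # each job waits for everything scheduled before it
        elapsed += b
    return total / len(bursts)

fcfs = avg_wait([24, 3, 3])          # long job first: the "convoy"
sjf = avg_wait(sorted([24, 3, 3]))   # shortest jobs first

print(fcfs, sjf)  # 17.0 vs 3.0: the convoy effect inflates waiting time
```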

4. Which of the following is not a primary function of an operating system?

Options:

Answer: c) User interface design

Explanation: An operating system’s primary functions include memory management (allocating memory), input/output management (handling devices), and process management (scheduling and synchronization). User interface design is typically handled by applications or desktop environments (e.g., GNOME, Windows Explorer), not the OS core, though some OSes provide basic UI components.

Key Concepts: OS functions, memory management, I/O management, process management.

5. Which of the following is not a type of system call?

Options:

Answer: d) Data encryption

Explanation: System calls are interfaces for user programs to request OS services, including file operations (e.g., open, read), memory allocation (e.g., brk, mmap; malloc itself is a C library function built on top of these), and I/O device operations (e.g., ioctl). Data encryption is typically handled by libraries or applications, not as a direct system call, though some OSes may provide encryption-related services indirectly.

Key Concepts: System calls, file operations, memory allocation, I/O operations.
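As a sketch, Python's os module exposes thin wrappers over the POSIX file-operation system calls named above; the temporary file path is chosen purely for illustration:

```python
import os
import tempfile

# Sketch: file-operation system calls via Python's os module.
# os.open/os.write/os.read/os.close wrap open(2)/write(2)/read(2)/close(2)
# on POSIX systems.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # open(2)
os.write(fd, b"hello, kernel")                        # write(2)
os.close(fd)                                          # close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)                                # read(2)
os.close(fd)
print(data)  # b'hello, kernel'
```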

6. Which of the following is not a process state?

Options:

Answer: d) Active

Explanation: A process can be in states like Running (executing on CPU), Ready (waiting for CPU), or Blocked (waiting for an event, e.g., I/O). Active is not a standard process state in OS terminology; it’s a vague term sometimes used informally but not part of the process state model.

Key Concepts: Process states, running, ready, blocked.

7. Which of the following is not a type of process scheduling algorithm?

Options:

Answer: d) Concurrent scheduling

Explanation: Round-robin, Priority scheduling, and FCFS are standard scheduling algorithms. Concurrent scheduling is not a recognized scheduling algorithm; concurrency refers to simultaneous execution, not a specific scheduling method.

Key Concepts: Scheduling algorithms, round-robin, priority scheduling, FCFS.
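A minimal round-robin sketch, assuming illustrative burst times and a quantum of 2, shows how a preempted process returns to the back of the queue:

```python
from collections import deque

# Sketch: round-robin scheduling with a fixed time quantum.
def round_robin(bursts, quantum):
    queue = deque(enumerate(bursts))   # (pid, remaining burst time)
    order = []                         # dispatch order of pids
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)
        if remaining > quantum:
            # Quantum expired: preempt and requeue with the leftover burst.
            queue.append((pid, remaining - quantum))
    return order

print(round_robin([5, 3, 1], quantum=2))  # [0, 1, 2, 0, 1, 0]
```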

8. Which of the following is not a benefit of using threads within a process?

Options:

Answer: d) Increased reliability due to reduced dependency on external processes

Explanation: Threads within a process enable parallel execution (improving performance), better resource utilization (sharing memory), and easier communication (via shared memory). Increased reliability is not a direct benefit, as threads share the same address space, and a thread failure can crash the entire process, not reducing dependency on external processes.

Key Concepts: Threads, parallel execution, resource utilization, communication.

9. Which of the following is not a common metric used to evaluate scheduling algorithms?

Options:

Answer: d) Disk I/O throughput

Explanation: Scheduling algorithms are evaluated using metrics like average waiting time (time spent waiting for CPU), average turnaround time (total time from submission to completion), and processor utilization (CPU usage efficiency). Disk I/O throughput measures storage performance, not CPU scheduling effectiveness.

Key Concepts: Scheduling metrics, waiting time, turnaround time, processor utilization.

10. Which of the following is not a common issue related to concurrent execution?

Options:

Answer: d) Monopolies

Explanation: Concurrent execution issues include deadlocks (processes blocked waiting for each other's resources), starvation (processes indefinitely denied resources), and livelocks (processes stuck in a loop of state changes without progress). "Monopolies" is not a standard concurrency term; it may loosely describe resource hogging but is not a recognized issue.

Key Concepts: Concurrency issues, deadlocks, starvation, livelocks.

11. Which of the following is not a benefit of using thread pools?

Options:

Answer: d) Increased flexibility due to dynamic thread creation

Explanation: Thread pools reuse a fixed set of threads, reducing overhead (improved performance), optimizing resource use (improved utilization), and simplifying management (easier management). Dynamic thread creation is not a benefit, as thread pools limit creation to a predefined size, sacrificing flexibility for efficiency.

Key Concepts: Thread pools, overhead, resource utilization, thread management.
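A thread-pool sketch using Python's standard ThreadPoolExecutor; the worker count and the task are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: a thread pool reuses a fixed set of worker threads instead of
# creating (and tearing down) one thread per task.
def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    # Ten tasks, but at most four threads ever exist in the pool.
    results = list(pool.map(square, range(10)))

print(results)  # squares of 0..9, computed by the reused workers
```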

12. Which of the following is not a commonly used synchronization mechanism in operating systems?

Options:

Answer: d) Shared memory

Explanation: Semaphores, mutexes, and condition variables are synchronization mechanisms used to coordinate access to shared resources. Shared memory is a mechanism for inter-process communication, not synchronization, as it requires additional synchronization tools to manage access.

Key Concepts: Synchronization, semaphores, mutexes, condition variables.
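A counting-semaphore sketch: at most two of six threads (illustrative numbers) may enter the guarded section at once:

```python
import threading
import time

# Sketch: a counting semaphore limiting concurrent access to a resource.
sem = threading.Semaphore(2)      # at most 2 threads inside the guarded block
active, peak = 0, 0
lock = threading.Lock()           # protects the bookkeeping counters themselves

def worker():
    global active, peak
    with sem:                     # blocks when 2 threads are already inside
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)          # simulate holding the resource
        with lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds the semaphore's initial count of 2
```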

13. Which of the following is not a commonly used synchronization primitive in operating systems?

Options:

Answer: d) Registers

Explanation: Semaphores, mutexes, and monitors are synchronization primitives for coordinating shared resource access. Registers are CPU hardware components for data storage, not used for synchronization.

Key Concepts: Synchronization primitives, semaphores, mutexes, monitors.

14. Which of the following is a commonly used technique to prevent deadlocks?

Options:

Answer: None (Correct answer missing; likely meant to include Banker’s Algorithm)

Explanation: The options listed do not include a standard deadlock prevention technique. Resource allocation graph is used for detection, not prevention. Priority inversion is a problem, not a solution. Spinning and busy waiting are inefficient synchronization methods. A correct option would be the Banker’s Algorithm, which prevents deadlocks by ensuring safe resource allocation states.

Key Concepts: Deadlock prevention, Banker’s Algorithm, resource allocation.
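A sketch of the Banker's Algorithm safety check mentioned above; the allocation and need matrices are illustrative two-resource examples:

```python
# Sketch: Banker's Algorithm safety check. A state is safe if there is
# some order in which every process can obtain its remaining need,
# finish, and release what it holds.
def is_safe(available, allocation, need):
    work = list(available)
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            # A process can finish if its remaining need fits in work.
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # It then releases everything it currently holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)   # safe iff every process could finish

allocation = [[0, 1], [2, 0]]      # what each process currently holds
need       = [[1, 0], [0, 1]]      # what each still needs
print(is_safe([1, 1], allocation, need))  # True: a safe ordering exists
```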

15. Which of the following is not a condition for a deadlock to occur?

Options:

Answer: d) Unlimited resources

Explanation: Deadlocks require four conditions: mutual exclusion (resources held exclusively), hold and wait (processes holding resources while waiting), no preemption (resources cannot be forcibly taken), and circular wait (processes form a cycle). Unlimited resources would prevent deadlocks, as resource contention wouldn’t occur.

Key Concepts: Deadlock conditions, mutual exclusion, hold and wait, no preemption, circular wait.
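Breaking any one of the four conditions prevents deadlock; a common sketch breaks circular wait by imposing a fixed global lock-acquisition order (lock and thread names are illustrative):

```python
import threading

# Sketch: breaking the circular-wait condition with a fixed lock order.
lock_a = threading.Lock()
lock_b = threading.Lock()
log = []

def transfer(name):
    # Every thread takes lock_a THEN lock_b, so a wait cycle cannot form.
    with lock_a:
        with lock_b:
            log.append(name)

t1 = threading.Thread(target=transfer, args=("t1",))
t2 = threading.Thread(target=transfer, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # ['t1', 't2']: both completed, no deadlock
```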

16. Which of the following is not an advantage of using a microkernel architecture?

Options:

Answer: b) More efficient inter-process communication (IPC)

Explanation: Microkernel architectures offer modularity (easier to develop and maintain), better security/reliability (isolated components), and potentially efficient memory management. However, IPC is less efficient due to message passing between user-space services, unlike monolithic kernels’ direct function calls.

Key Concepts: Microkernel, modularity, IPC, security.

17. Which of the following is not a characteristic of a monolithic kernel?

Options:

Answer: c) Kernel is modular and easily extensible

Explanation: A monolithic kernel runs all functions in a single address space (a), is loaded as a single binary (b), and provides services directly (d). However, it is not inherently modular or easily extensible, as its tightly coupled design makes modifications complex compared to microkernels.

Key Concepts: Monolithic kernel, address space, modularity.

18. Which of the following is not a component of the Process Control Block (PCB)?

Options:

Answer: c) Memory allocation

Explanation: The PCB contains process state (e.g., running), process ID, and CPU scheduling information (e.g., priority). Memory allocation is managed by the OS’s memory management system, not stored directly in the PCB, though the PCB may reference memory-related data (e.g., page tables).

Key Concepts: Process Control Block, process state, process ID, scheduling.

19. Which of the following is not a process state in the process life cycle?

Options:

Answer: c) Suspended

Explanation: Standard process states include Running, Blocked, Ready, New, and Terminated. Suspended is sometimes used in specific OSes (e.g., swapped out to disk), but it’s not a universal state in the standard process life cycle model.

Key Concepts: Process life cycle, process states, running, blocked, terminated.

20. Which of the following is not a type of inter-process communication (IPC) mechanism?

Options:

Answer: d) Interrupts

Explanation: IPC mechanisms include shared memory (processes share a memory region), message passing (data exchange via messages), and pipes (unidirectional data channels). Interrupts are hardware signals for event handling, not an IPC mechanism.

Key Concepts: IPC, shared memory, message passing, pipes.
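A minimal pipe sketch using os.pipe(); for brevity both file descriptors stay in one process, though across a fork the same pair connects a parent and a child:

```python
import os

# Sketch: a pipe as a unidirectional byte channel. os.pipe() returns a
# (read_fd, write_fd) pair; data written to one end is read from the other.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"ping")   # producer end
os.close(write_fd)            # closing signals end-of-stream to the reader

msg = os.read(read_fd, 16)    # consumer end
os.close(read_fd)
print(msg)  # b'ping'
```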

21. Which of the following is not a common thread-related issue?

Options:

Answer: d) Concurrent modification

Explanation: Thread issues include deadlock (threads blocked waiting on each other), livelock (threads stuck in a loop of responses without progress), and race conditions (unpredictable outcomes from unsynchronized access to shared data). Concurrent modification is a specific issue in some contexts (e.g., Java collections), but it is not a standard thread-issue term; it is a consequence of race conditions.

Key Concepts: Thread issues, deadlock, livelock, race condition.
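A sketch of eliminating a race condition on a shared counter with a mutex; the thread and iteration counts are illustrative:

```python
import threading

# Sketch: a shared counter protected by a lock. Without the lock, the
# read-modify-write of `counter += 1` is a race between threads.
counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:            # serializes the read-modify-write sequence
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: deterministic because every update is locked
```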

22. Which of the following is true about device drivers?

Options:

Answer: a) They are part of the operating system kernel

Explanation: Device drivers are kernel components that interface between the OS and hardware devices, running in kernel mode for direct hardware access. They don’t run in user mode, aren’t used for IPC, and don’t manage process life cycles.

Key Concepts: Device drivers, kernel mode, hardware interface.

23. What is the fundamental difference between a process and a thread?

Options:

Answer: a) A process has its own memory space, while threads share the same memory space

Explanation: A process has its own address space, while threads within a process share the same memory and resources, differing only in their execution context (stack, registers). Option b is incorrect, as both can run on multiple processors. Option c is true but not the fundamental difference. Option d is false, as neither directly accesses hardware.

Key Concepts: Process, thread, memory space, resource sharing.

24. Which scheduling algorithm is used in real-time operating systems?

Options:

Answer: d) Earliest Deadline First

Explanation: Earliest Deadline First (EDF) is used in real-time OSes, prioritizing tasks with the closest deadlines to ensure timely execution, critical for real-time constraints. Round-robin, FCFS, and SJF are not deadline-aware, making them unsuitable for real-time systems.

Key Concepts: Real-time scheduling, EDF, deadlines.
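A minimal EDF sketch; for a fixed ready set, picking the nearest deadline at each decision point reduces to sorting by deadline (task names and deadlines are illustrative):

```python
# Sketch: Earliest Deadline First over a fixed ready set.
def edf_order(tasks):
    # tasks: list of (name, deadline); run in ascending deadline order.
    return [name for name, _ in sorted(tasks, key=lambda t: t[1])]

tasks = [("logger", 50), ("sensor", 5), ("control", 12)]
print(edf_order(tasks))  # ['sensor', 'control', 'logger']
```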

25. Which of the following is true about interrupt handlers?

Options:

Answer: a) They are part of the operating system kernel

Explanation: Interrupt handlers are kernel components that process hardware interrupts, running in kernel mode. They can be interrupted (option c is false), don’t run in user mode, and aren’t used for IPC.

Key Concepts: Interrupt handlers, kernel mode, hardware interrupts.

26. What is the difference between mutexes and semaphores?

Options:

Answer: b) Mutexes are binary locks, while semaphores are integer counters

Explanation: Mutexes are binary locks (locked/unlocked) for mutual exclusion, ensuring one thread accesses a resource. Semaphores are counters, allowing multiple threads (if counting semaphore) or signaling (if binary). Option a reverses roles. Option c is partially true but not the key difference. Option d depends on implementation, not a defining trait.

Key Concepts: Mutexes, semaphores, mutual exclusion, signaling.

27. In a preemptive scheduling algorithm, when does context switching occur?

Options:

Answer: c) When a higher-priority process becomes ready

Explanation: In preemptive scheduling, the OS interrupts a running process to switch to a higher-priority process that becomes ready (e.g., from blocked to ready). Voluntary yielding is non-preemptive. Blocking for I/O causes a switch but isn’t priority-driven. Lower-priority processes don’t preempt higher ones.

Key Concepts: Preemptive scheduling, context switching, priority.

28. Which of the following is true about process scheduling?

Options:

Answer: b) It determines which process will use the processor next

Explanation: Process scheduling selects the next process to run on the CPU based on algorithms like Round-robin or Priority scheduling. Memory allocation, IPC, and I/O operations are handled by other OS components.

Key Concepts: Process scheduling, CPU allocation, scheduling algorithms.

29. Which of the following is true about process synchronization?

Options:

Answer: d) It is used to coordinate access to shared resources

Explanation: Process synchronization ensures orderly access to shared resources (e.g., using mutexes or semaphores) to prevent issues like race conditions. Process life cycle, memory allocation, and IPC are separate OS functions.

Key Concepts: Process synchronization, shared resources, race conditions.