CST 334 - Week 2 - I like this class

 

  1. Topics we covered in class 
    • Processes
    • Process Tree
    • PID (Process ID)
    • PPID (Parent Process ID)
    • Zombie Process
    • Fork, Wait, Exec
    • Multiprogramming
    • Approaches: Naive and Limited Direct Execution
    • Scheduling Policy
    • Turnaround Time
    • Response Time
    • Shortest Job First (SJF) Policy
    • First Come, First Serve (FCFS) Policy
    • Shortest Time to Completion First (STCF)
    • Round Robin
    • Multi-Level Feedback Queue (MLFQ)
    • CPU-Bound vs I/O-Bound Jobs
  2. Explanation of each topic
    • Processes – A process is a running instance of a program. It contains its own memory space, registers, and resources managed by the operating system.
    • Process Tree – A process tree shows how processes are related. Each process can create child processes, forming a parent-child hierarchy.
    • PID (Process ID) – Every process has a unique identifier called a PID. It helps the OS keep track of and manage each process individually.
    • PPID (Parent Process ID) – The PPID identifies the parent of a process. This links the process to the one that created it.
    • Zombie Process - A zombie process is one that has finished executing but still exists in the process table because its parent hasn’t read its exit status.
    • Fork, Wait, Exec - These are system calls used for process creation and management (I put a short C sketch showing how they fit together below).
      1. Fork – fork() is used to create a new process. The new process is a child of the parent process. After the fork, both processes run the same code, but they have separate memory spaces. The return value helps distinguish them. The parent gets the child’s PID, while the child gets 0.
      2. Wait – The wait() system call makes the parent process pause until one of its child processes finishes executing. This prevents the parent from running too far ahead and helps avoid zombie processes, since it lets the parent collect the child’s exit status so the OS can free the child’s entry in the process table.
      3. Exec - The exec() family of calls replaces the current process image with a new program. After a successful exec, the old code, data, and stack are replaced, but the process keeps the same PID.
    • Multiprogramming - Multiprogramming is the practice of keeping multiple processes in memory at once so the CPU always has something to execute. When one process is waiting for I/O, the CPU can switch to another ready process. This increases CPU utilization and overall system efficiency.
    • Naive and Limited Direct Execution Approaches
      1. Naive Direct Execution lets a program run directly on the hardware with full access, but it’s unsafe because the OS can lose all control.
      2. Limited Direct Execution solves this by running user programs directly on the CPU but under controlled supervision. The OS uses traps and system calls to manage privileged operations safely, ensuring protection.
    • Scheduling Policy - A scheduling policy determines how the OS decides which process uses the CPU next. Policies balance efficiency, fairness, and response time. Policies we learned about include FCFS, SJF, and Round Robin.
    • Turnaround Time - The total time from when a process arrives in the system to when it finishes.
    • Response Time - The time from when a process arrives to when it first gets to run on the CPU and produce a response.
    • Shortest Job First (SJF) - This scheduling policy runs the process with the shortest duration first to minimize average turnaround time.
    • First Come, First Serve (FCFS) - Processes are executed in the order they arrive. It’s simple but can cause long waits for short jobs if a long one arrives first.
    • Shortest Time to Completion First (STCF) – A preemptive improvement to SJF. Whenever a new process arrives, the scheduler switches to whichever process has the least time left to completion, even if that means preempting the one currently running.
    • Round Robin (RR) - Each process gets a small, fixed time slice to run before moving to the back of the queue. This is great for response time but tends to make turnaround time worse. (I worked through the math on a small example of these policies below.)
    • Multi-Level Feedback Queue - A dynamic scheduling method where processes can move between multiple priority queues depending on their behavior: jobs that keep using their whole time slice drop to lower-priority queues, while jobs that frequently give up the CPU for I/O stay at higher priority.
    • CPU-Bound vs I/O-Bound Jobs - CPU-bound jobs spend most time doing calculations and need a lot of CPU time, while I/O-bound jobs frequently wait for input/output operations and need less CPU.
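
    To help myself remember how fork, wait, and exec fit together, here is a minimal C sketch I put together. It is my own example, not code from class, and the program it launches ("ls -l") is just something I picked to show exec working:

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/wait.h>
      #include <unistd.h>

      int main(void) {
          pid_t pid = fork();                  /* create a child process */
          if (pid < 0) {
              perror("fork failed");
              exit(1);
          } else if (pid == 0) {
              /* child: replace this process image with a new program */
              execlp("ls", "ls", "-l", (char *)NULL);
              perror("exec failed");           /* only reached if exec fails */
              exit(1);
          } else {
              /* parent: wait for the child so it does not become a zombie */
              int status;
              waitpid(pid, &status, 0);
              printf("child %d finished\n", (int)pid);
          }
          return 0;
      }

    The parent sees fork() return the child's PID and blocks in waitpid() until the child is done; the child sees 0 and immediately execs into "ls", keeping the same PID the whole time.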
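
    To practice the scheduling math, I made up three jobs of lengths 5, 3, and 2 that all arrive at time 0 (these numbers are my own practice example, not from class). With FCFS in that order the turnaround times are 5, 8, and 10, for an average of about 7.7; with SJF the run order becomes 2, 3, 5, and the turnaround times are 2, 5, and 10, for an average of about 5.7. For Round Robin, since that is the math I keep second-guessing, I wrote a tiny C sketch that simulates a 1-unit time slice and prints each job's response and turnaround time:

      #include <stdio.h>

      int main(void) {
          /* Three made-up jobs, all arriving at time 0 (my own practice numbers). */
          int burst[]     = {5, 3, 2};
          int remaining[] = {5, 3, 2};
          int first_run[] = {-1, -1, -1};
          int finish[]    = {0, 0, 0};
          int n = 3, slice = 1, time = 0, done = 0;

          /* Because every job arrives at time 0, cycling through them in a fixed
             order with a 1-unit slice behaves the same as a Round Robin queue. */
          while (done < n) {
              for (int i = 0; i < n; i++) {
                  if (remaining[i] == 0)
                      continue;
                  if (first_run[i] == -1)
                      first_run[i] = time;     /* response time = first run - arrival */
                  int run = remaining[i] < slice ? remaining[i] : slice;
                  time += run;
                  remaining[i] -= run;
                  if (remaining[i] == 0) {
                      finish[i] = time;        /* turnaround = finish - arrival */
                      done++;
                  }
              }
          }

          for (int i = 0; i < n; i++)
              printf("Job %d: length=%d response=%d turnaround=%d\n",
                     i, burst[i], first_run[i], finish[i]);
          return 0;
      }

    It prints turnaround times of 10, 8, and 6 (average 8), which is worse than SJF's average of about 5.7, but the response times are 0, 1, and 2, which shows why Round Robin feels responsive even though its turnaround numbers look bad.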
  3. Identify least-understood topics -- of these topics, which was the hardest for you to write the descriptions?
    • The topic I understood least this week was how to calculate Round Robin turnaround times. I understand the concept, but at first I had trouble figuring out how the processes get split up across the time slices, so to speak. I kind of understand the multi-level feedback queue topic, but that was also a little confusing to me.
  4. Explain the nature of your confusion -- for these difficult topics, what pieces of them make sense and what pieces are difficult to describe?
    • With Round Robin I am mostly just confused about the math. I think the scheduling practice helped a lot, but I cannot say I am 100% confident in my ability to calculate these, especially with more complicated jobs. Multi-level feedback was a little confusing to me because I don’t really understand how the time allotments are chosen. Are they just arbitrary, or does each operating system pick what seems like a reasonable number?
  5. Identify "aha" moments -- which topic did you find it easiest to write about, and why did it make sense?
    • The different scheduling policies. I loved doing the math on those. I could easily understand how they worked and how to calculate the response times and the turnaround times. FCFS (FIFO) to me is the easiest because it’s just: what came first, how long did it take, and what comes next. SJF and STCF were a little more complicated, but nowhere near as bad as Round Robin. All of these topics make sense, but the math for Round Robin is kind of confusing.
  6. Ask questions -- what do you think will come next? Are there any gaps that don't seem to be explained yet?
    • I think we will work more with Limited Direct Execution and with how operating systems have evolved alongside newer CPUs, since a lot of them are multi-core and multi-threaded and do not necessarily use the same scheduling policies as before.
  7. What connections did you see to other classes and applications -- have you used/heard/thought about these topics before?
    • I had not heard of most of these topics before, with the exception of PID; I did not know there was a PPID. These process and scheduling topics help me better connect with my understanding of CPUs and how they manage work. The CPU is the central component that executes all processes, and scheduling policies determine how its time is divided among them. The concepts of turnaround time and response time show how CPU scheduling impacts performance. CPU-bound jobs rely on the processor’s speed, while I/O-bound jobs spend more time waiting for external devices. Even multiprogramming and multi-level feedback queues are about maximizing CPU utilization.
