CST 334 Week 6
Topics Covered in Class
· Condition variables
· Waiting vs. spinning
· Signaling and waking threads
· Mesa vs. Hoare semantics
· Producer/Consumer
· Semaphores
· Binary semaphores
· Semaphores for ordering
· Barriers
· Reader–writer locks
· Deadlock
· Deadlock conditions
· Deadlock prevention
· Deadlock avoidance
· Deadlock detection and recovery
· Starvation
· Livelock
· Fairness and starvation freedom
· Non-deadlock concurrency bugs
· Atomicity violations
· Order violations
Explanation of Each Topic
· Condition Variables - Condition variables allow a thread
to sleep until a specific condition becomes true. Instead of repeatedly
checking, the thread waits safely until another thread signals that it can
continue.
· Waiting vs. Spinning - Spinning wastes CPU cycles by
repeatedly checking a condition. Waiting puts the thread to sleep and resumes
it only when the condition changes, making it more efficient.
· Signaling and Waking Threads - Signaling is how a thread notifies
others that the shared state has changed. Proper signaling ensures that waiting
threads wake up at the correct time and avoid missed notifications.
· Mesa vs. Hoare Semantics - With Mesa semantics, a thread that
is signaled must re-check the condition because it may no longer be true. With Hoare
semantics, the condition is guaranteed to be true when the thread wakes up, but
this is less common in real systems.
· Producer/Consumer - This problem models coordination
between producers generating data and consumers using it. Synchronization
ensures producers don’t overfill the buffer and consumers don’t read empty
slots.
· Semaphores - Semaphores use a counter and two
atomic operations, wait (decrement, possibly blocking) and signal (increment),
to control access to shared resources or coordinate execution between threads.
· Binary Semaphores - A binary semaphore only allows one
thread through at a time and behaves like a mutex, enforcing mutual exclusion
in critical sections.
· Semaphores for Ordering - Semaphores can enforce the order in
which threads execute, such as making sure one thread completes before another
proceeds.
· Barriers - A barrier forces threads to wait
until all participating threads reach the same execution point. Only once
everyone arrives does execution continue.
· Reader–Writer Locks - These locks allow multiple readers
to access shared data at the same time but require exclusive access for
writers. This improves performance when reads are much more common than writes.
· Deadlock - Deadlock happens when threads are
stuck waiting on each other’s resources and none can proceed, causing the
program to stop making progress.
· Deadlock Conditions - Deadlock requires four conditions:
mutual exclusion, hold-and-wait, no preemption, and circular wait. All must be
present for deadlock to occur.
· Deadlock Prevention - Deadlock can be prevented by
eliminating one of the required conditions, such as enforcing a strict lock
ordering or preventing hold-and-wait behavior.
· Deadlock Avoidance - Avoidance techniques attempt to
predict unsafe states and avoid them, though they are rarely used in practice
due to complexity.
· Deadlock Detection and Recovery - Some systems allow deadlocks to
occur and then detect them, recovering by terminating or restarting affected
threads.
· Starvation - Starvation occurs when a thread
never makes progress because other threads repeatedly acquire the needed
resources first.
· Livelock - In livelock, threads stay active
but continuously respond to each other in a way that prevents progress, even
though they are not blocked.
· Fairness and Starvation Freedom - Fair scheduling ensures each thread
eventually makes progress. Some synchronization mechanisms guarantee starvation
freedom by enforcing access order.
· Non-Deadlock Concurrency Bugs - These bugs cause incorrect behavior
without blocking execution. They are often harder to debug because programs
continue running.
· Atomicity Violations - Atomicity violations occur when a
set of operations that should execute together is interrupted, leaving shared
data in an inconsistent state.
· Order Violations - Order violations happen when code
assumes one operation occurs before another, but no synchronization enforces
that order.
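The condition-variable, waiting-vs-spinning, and Mesa-semantics points above can be shown in one small sketch. This is a minimal Python example (the course material presumably uses C with pthreads, but `threading.Condition` plays the same role as `pthread_cond_t`); note the `while` loop, which is exactly the Mesa-style re-check described above:

```python
import threading

ready = False
cond = threading.Condition()
results = []

def waiter():
    # Sleep until the condition holds instead of spinning on `ready`.
    with cond:
        # Mesa semantics: re-check the condition in a loop, because being
        # woken does not guarantee the condition is still true.
        while not ready:
            cond.wait()
        results.append("woke with ready set")

def signaler():
    global ready
    with cond:
        ready = True   # change shared state while holding the lock...
        cond.notify()  # ...then wake one waiting thread

t = threading.Thread(target=waiter)
t.start()
signaler()
t.join()
print(results[0])  # woke with ready set
```

The `while` (rather than `if`) also covers the case where the signal arrives before the waiter ever sleeps: the waiter then sees `ready` already true and skips the wait entirely.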
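The producer/consumer description above is the classic bounded-buffer problem, and semaphores solve it directly: one semaphore counts empty slots, one counts full slots, and a lock protects the buffer itself. A minimal single-producer, single-consumer sketch in Python:

```python
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)  # counts free slots
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # protects the buffer structure

def producer(n):
    for i in range(n):
        empty.acquire()        # blocks if the buffer is full
        with mutex:
            buffer.append(i)
        full.release()         # announce one more filled slot

consumed = []

def consumer(n):
    for _ in range(n):
        full.acquire()         # blocks if the buffer is empty
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # announce one more free slot

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, ..., 9]
```

`empty` stops the producer from overfilling the buffer and `full` stops the consumer from reading empty slots, which is exactly the coordination the bullet describes.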
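Semaphores for ordering, as described above, rely on initializing the counter to 0 so the second thread must block until the first one signals. A small sketch (thread names here are just illustrative):

```python
import threading

done = threading.Semaphore(0)  # starts at 0: forces B to wait for A
log = []

def first():
    log.append("A runs first")
    done.release()             # signal that A has finished

def second():
    done.acquire()             # blocks until first() releases
    log.append("B runs second")

b = threading.Thread(target=second)
a = threading.Thread(target=first)
b.start()  # deliberately start B first; the semaphore still enforces order
a.start()
a.join(); b.join()
print(log)  # ['A runs first', 'B runs second']
```

Even though B is started first, it cannot pass `acquire()` until A runs, so the execution order is guaranteed regardless of scheduling.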
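The barrier bullet above maps directly onto a library primitive. This sketch uses Python's `threading.Barrier`, where no thread proceeds past `wait()` until all participants have arrived:

```python
import threading

barrier = threading.Barrier(3)  # all 3 threads must arrive before any continue
after = []

def worker(name):
    # per-thread "phase 1" work would happen here
    barrier.wait()              # block until all 3 threads reach this point
    after.append(name)          # "phase 2" runs only once everyone arrived

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(after))  # [0, 1, 2]
```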
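One concrete way to apply the deadlock-prevention idea above is a strict global lock ordering: if every thread acquires locks in the same order, a circular wait can never form. A minimal sketch:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
out = []

def acquire_both(tag):
    # Every thread takes lock_a before lock_b. With one global order,
    # a circular wait (one of the four deadlock conditions) cannot form,
    # so this code cannot deadlock.
    with lock_a:
        with lock_b:
            out.append(tag)

t1 = threading.Thread(target=acquire_both, args=("t1",))
t2 = threading.Thread(target=acquire_both, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(out))  # ['t1', 't2']
```

If one thread instead took `lock_b` first while the other took `lock_a` first, each could end up holding one lock while waiting forever for the other, which is the circular wait the bullet describes.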
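The atomicity-violation bullet above can be made concrete with the classic lost-update example: `counter += 1` is a read-modify-write, and without a lock two threads may interleave between the read and the write and drop an update. Holding a lock around the whole step makes it atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    # Without the lock, `counter += 1` (read, add, write back) could be
    # interrupted mid-sequence, losing updates: an atomicity violation.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

With the lock, the final count is always exactly 4 × 10000; without it, the result would depend on scheduling and could silently come up short.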
Identify least-understood topics — of these topics, which was the hardest
for you to write the descriptions?
This week, I did not feel that there were any topics that I
misunderstood. The material on condition variables, semaphores, and concurrency
bugs built naturally on previous concepts, which made it easier to follow.
While some topics were more detailed than others, none of them stood out as
particularly confusing.
Explain the nature of your confusion — for these difficult topics, what
pieces of them make sense and what pieces are difficult to describe?
This week, I did not experience confusion with the material. The topics
around condition variables, semaphores, and concurrency bugs were clearly
presented and built logically on earlier concepts. Overall, the material was very
understandable and manageable rather than difficult.
Identify “aha” moments — which topic did you find it easiest to write
about, and why did it make sense?
My “aha” moment came from the discussion of condition variables versus
spinning. Understanding that waiting puts a thread to sleep instead of wasting
CPU cycles made the importance of condition variables very clear. This concept
stood out because it directly connects correctness to performance and
efficiency in real systems.
Ask questions — what do you think will come next? Are there any gaps that
don’t seem to be explained yet?
I am interested in learning how modern operating systems choose between
different synchronization primitives in practice. While the concepts make sense
individually, I am curious about how real systems balance fairness,
performance, and complexity when deciding whether to use locks, semaphores, or
condition variables.
What connections did you see to other classes and applications — have you
used/heard/thought about these topics before?
These concepts connect strongly to experiences in programming and
systems-related classes, especially when working with multi-threaded
applications. Problems like race conditions and deadlocks are common in
real-world software such as web servers, databases, and operating systems. This
week’s material helped reinforce why careful synchronization design is critical
in any concurrent system.