CST 334 - Week 5 Midterms
1. Topics Covered in Class
· Concurrency
· Threads
· Single-threaded vs. multi-threaded processes
· Thread state and thread control blocks
· Thread stacks and shared address space
· Thread creation
· Thread completion
· Race conditions
· Shared data
· Locks
· Critical sections
· Interrupt disabling
· Spin locks
· Test-and-set / atomic operations
· Lock fairness and performance
· Lock-based concurrent data structures
· Concurrent counters
· Scalable vs. non-scalable data structures
2. Explanation of Each Topic
· Concurrency - Concurrency refers to multiple units of execution happening at the same time, or appearing to happen at the same time. The OS creates the illusion of multiple tasks progressing simultaneously, even on a single CPU.
· Threads - A thread is a smaller unit of execution within a process. Threads share the same address space, but each has its own program counter, registers, and stack.
· Single-Threaded vs. Multi-Threaded Processes - A single-threaded process has one execution path, while a multi-threaded process has multiple threads running independently. Multi-threaded processes can perform multiple tasks at once.
· Thread Control Blocks (TCBs) - Like a process control block for processes, a thread control block stores a thread’s execution state, including registers and program counter, allowing the OS to switch between threads.
· Thread Stacks and Shared Address Space - Each thread has its own stack for local variables and function calls, but all threads share the same heap and data segment. This makes data sharing easy but also dangerous.
· Thread Creation - Threads are created using pthread_create, which starts a new function executing concurrently with the calling thread. Execution order is determined by the OS scheduler.
· Thread Completion - pthread_join is used to wait for a thread to finish before continuing execution. This ensures the parent thread does not exit before child threads complete.
· Race Conditions - A race condition occurs when multiple threads access shared data and the final result depends on the timing of execution. This leads to unpredictable and incorrect behavior.
· Shared Data - Shared data exists when threads access the same memory location. Without coordination, shared data can easily become corrupted.
· Critical Sections - A critical section is a part of the code where shared data is accessed. Only one thread should be allowed inside a critical section at a time.
· Locks - Locks ensure mutual exclusion by allowing only one thread to enter a critical section. Other threads must wait until the lock is released.
· Interrupt Disabling - One early synchronization technique disables interrupts to prevent context switches during a critical section. While simple, it only works on a single CPU and requires trusting programs not to monopolize the processor, making it unsafe and impractical for modern multiprocessor systems.
· Spin Locks - Spin locks cause threads to continuously check whether a lock is available. They work well on multiprocessor systems, but they waste CPU cycles on single-CPU systems, where a spinning thread cannot make progress until the lock holder runs again.
· Test-and-Set / Atomic Instructions - Atomic hardware instructions allow a value to be tested and updated in one indivisible step. These are used to safely implement locks without race conditions.
· Lock Fairness and Performance - Not all locks are fair; some threads may starve while waiting. Performance depends on contention, CPU count, and how long locks are held.
· Lock-Based Concurrent Data Structures - Data structures become thread-safe by adding locks, but poor locking strategies can greatly reduce performance and scalability.
· Concurrent Counters - A simple shared counter protected by a single lock is correct but does not scale. Multiple threads cause heavy contention and slow performance.
· Scalable vs. Non-Scalable Structures - Scalable structures reduce contention by spreading work across CPUs, often using local data with occasional global updates. This improves performance on multicore systems.
3. Identify least-understood topics — of these topics, which was the hardest for you to write the descriptions?
· This week, I did not feel that there were any topics I completely misunderstood. While some concepts required a little more thought, none of them felt confusing in the way earlier memory-management topics had. The material built on itself logically, which made it easier to follow and describe.
4. Explain the nature of your confusion — for these difficult topics, what pieces of them make sense and what pieces are difficult to describe?
· This week, I did not experience much confusion. Instead, the challenge came from clearly explaining how concurrency issues actually happen at runtime. Concepts like shared data and race conditions made sense, but describing how small timing differences between threads can create incorrect behavior required slowing down and thinking carefully through examples. Understanding how locks protect critical sections was straightforward, but explaining why poor lock placement can hurt performance took more thought. Overall, the difficulty was more about explaining than understanding.
5. Identify “aha” moments — which topic did you find it easiest to write about, and why did it make sense?
· My aha moment came from learning about race conditions and locks, especially when seeing how a simple data structure like a counter can fail without proper synchronization. Understanding that correctness and performance are both affected by how locks are used made this topic easy to write about. It was interesting to see how adding too much locking can solve correctness issues but harm scalability.
6. Ask questions — what do you think will come next? Are there any gaps that don’t seem to be explained yet?
· I am curious about how modern operating systems decide when to use simple locking mechanisms and when to use more advanced synchronization techniques. I would also like to learn more about how concurrency issues are handled at an even lower level, such as within kernels or hardware, and how operating systems balance correctness with performance in highly concurrent environments.
7. What connections did you see to other classes and applications — have you used/heard/thought about these topics before?
· These topics connect strongly to prior programming and systems courses, especially when working with multi-threaded applications. Race conditions, shared memory, and synchronization are issues that appear in real applications such as web servers, databases, and operating systems. This week’s material helped clarify why concurrency bugs are often difficult to detect and why careful design is necessary when multiple threads share data.