CST 334 - Week 4
1. Topics we covered in class
· Free-space management
· External and internal fragmentation
· Free lists
· Splitting and coalescing
· Memory allocation using malloc and free
· Translation Lookaside Buffer (TLB)
· TLB hits and misses
· Smaller page tables
· Multi-level page tables
· Swap space
· Page faults
· Page replacement policies
· Cache management
· Optimal replacement policy
· FIFO replacement
2. Explanation of each topic
· Free-Space Management - Free-space management focuses on
how the operating system or memory allocator keeps track of unused memory so it
can be efficiently reused when programs request more space. This becomes more
difficult when memory blocks are different sizes rather than fixed-size pages.
· External Fragmentation - External fragmentation occurs
when free memory exists but is split into small, non-contiguous chunks, making
it impossible to satisfy larger allocation requests even though enough total
memory is available.
· Internal Fragmentation - Internal fragmentation happens
when a program is given more memory than it actually needs, and the extra space
inside the allocated block goes unused. This often occurs with fixed-size
allocation units.
· Free Lists - A free list is a data structure
used to track all available pieces of memory in the heap. The allocator
searches this list when deciding where to place new memory requests.
· Splitting - Splitting occurs when a large
free block is broken into two pieces: one piece is allocated to the program,
and the remainder stays on the free list as available memory.
· Coalescing - Coalescing merges adjacent free
blocks back into a single larger block when memory is freed, reducing
fragmentation and improving the chances of satisfying future large requests.
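The free-list ideas above (first-fit search, splitting, and coalescing) can be sketched in a few lines of Python. This is a simplified model, not a real allocator: free blocks are represented as (address, size) pairs, and first-fit is just one of several placement policies an allocator could use.

```python
def alloc_block(free_list, size):
    """First-fit: scan for a free block big enough, splitting off the rest."""
    for i, (addr, blk_size) in enumerate(free_list):
        if blk_size >= size:
            if blk_size == size:
                free_list.pop(i)                               # exact fit
            else:
                free_list[i] = (addr + size, blk_size - size)  # split
            return addr
    # Enough total memory may exist, but no single block is big
    # enough -> external fragmentation.
    return None

def free_block(free_list, addr, size):
    """Return a block to the list, coalescing adjacent free blocks."""
    free_list.append((addr, size))
    free_list.sort()                                 # order by address
    merged = [free_list[0]]
    for blk_addr, blk_size in free_list[1:]:
        last_addr, last_size = merged[-1]
        if last_addr + last_size == blk_addr:        # adjacent: coalesce
            merged[-1] = (last_addr, last_size + blk_size)
        else:
            merged.append((blk_addr, blk_size))
    free_list[:] = merged
```

Freeing two neighboring blocks merges them back into one larger block, which is exactly why coalescing helps satisfy future large requests.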
· Memory allocation using malloc and
free - malloc
requests memory at runtime from the heap, while free returns that memory when
it is no longer needed. This flexibility allows programs to manage
variable-sized data structures.
· Translation Lookaside Buffer (TLB) - The TLB is a small, fast
hardware cache inside the MMU that stores recent virtual-to-physical address
translations, allowing the CPU to avoid expensive page table lookups.
· TLB Hits and Misses - A TLB hit means the address
translation is found quickly in the TLB, improving performance. A TLB miss
requires consulting the page table, which slows execution.
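The cost of hits versus misses can be estimated with a simple effective-access-time calculation. The timing numbers in the example are made-up illustrative values, and the model assumes a single-level page table, so a miss costs exactly one extra memory access:

```python
def effective_access_time(hit_rate, tlb_ns, mem_ns):
    # Hit: pay for the TLB lookup plus one memory access.
    hit_cost = tlb_ns + mem_ns
    # Miss: TLB lookup, one extra memory access to read the page
    # table entry, then the actual memory access.
    miss_cost = tlb_ns + 2 * mem_ns
    return hit_rate * hit_cost + (1 - hit_rate) * miss_cost

# With a 99% hit rate, a 1 ns TLB, and 100 ns memory:
# 0.99 * 101 + 0.01 * 201 = 102 ns on average.
```

Even a small miss rate shows up in the average, which is why programs with good locality run noticeably faster.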
· Smaller Page Tables - Because linear page tables take
up a large amount of memory, systems use techniques like larger page sizes or
multi-level page tables to reduce space overhead.
· Multi-Level Page Tables - Multi-level page tables break a
large page table into smaller pieces and only allocate entries for parts of the
address space that are actually used, greatly reducing memory waste.
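The "break it into pieces" step comes down to slicing the virtual address into per-level indices plus an offset. The constants below are assumptions for illustration (a 32-bit address, 4 KiB pages, and a 10-bit index at each of two levels, the classic x86-style split), not anything specific from class:

```python
PAGE_BITS = 12    # 4 KiB pages -> low 12 bits are the page offset
LEVEL_BITS = 10   # 10-bit index into each of the two table levels

def split_vaddr(vaddr):
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    # Index into the second-level (inner) page table.
    inner = (vaddr >> PAGE_BITS) & ((1 << LEVEL_BITS) - 1)
    # Index into the first-level table (the page directory).
    outer = (vaddr >> (PAGE_BITS + LEVEL_BITS)) & ((1 << LEVEL_BITS) - 1)
    return outer, inner, offset
```

If an outer entry is marked invalid, the entire inner table it would point to never has to be allocated, which is where the memory savings come from.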
· Swap Space - Swap space is an area on disk used
to store memory pages that do not currently fit in physical RAM. It allows the
system to support programs that use more memory than is physically available.
· Page Faults - A page fault occurs when a
process accesses a page that is not currently in physical memory, causing the
OS to load it from disk before allowing execution to continue.
· Page Replacement Policies - When memory is full, the OS must
decide which page to evict. Replacement policies guide this decision in order
to minimize expensive disk accesses.
· Cache Management - Physical memory acts like a
cache for disk-resident pages, and good cache management aims to maximize page
hits while minimizing page faults.
· Optimal Replacement Policy - The optimal policy evicts the
page that will not be used for the longest time in the future. While impossible
to implement in practice, it serves as a benchmark for comparing real policies.
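Because the optimal policy only needs to look ahead in a known reference string, it is easy to simulate even though no real system can implement it. A small Python sketch (the linear `refs.index` scan per eviction is slow but fine for tracing lab-sized examples):

```python
def optimal_faults(refs, nframes):
    """Count page faults under Belady's optimal replacement policy."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                                   # hit
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                # Index of p's next reference, or infinity if never used again.
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float('inf')
            # Evict the resident page used furthest in the future.
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults
```

This is exactly the hand-tracing process from the lab: at each fault, compare every resident page's next use and evict the one needed last (or never again).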
· FIFO Replacement - FIFO removes the page that has
been in memory the longest, regardless of how often it is used. While simple,
it can perform poorly and even show counterintuitive behavior.
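The counterintuitive behavior is Belady's anomaly: on some reference strings, giving FIFO more frames produces more faults. A short Python sketch, using the classic textbook reference string for that anomaly:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                          # hit: FIFO ignores recency
        faults += 1
        if len(frames) == nframes:
            frames.discard(order.popleft())   # evict the oldest resident page
        frames.add(page)
        order.append(page)
    return faults

# Belady's anomaly: 3 frames give 9 faults, but 4 frames give 10.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

Because FIFO never looks at how recently or often a page is used, the oldest page it evicts may be the one about to be referenced again.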
3. Identify least-understood topics -- of these topics, which was the
hardest for you to write the descriptions for?
· The least understood topic for me this week was Belady’s optimal page replacement
policy. While the policy itself is straightforward in theory, fully
understanding how it makes eviction decisions required thinking ahead in ways
that real systems cannot. Tracing future memory accesses and comparing them
across pages was challenging at first, especially when trying to apply the
policy consistently across examples.
- Explain the nature of your
confusion -- for these difficult topics, what pieces of them make sense
and what pieces are difficult to describe?
- What made Belady’s policy
confusing was that it relies on perfect knowledge of future page
references. I understood the idea that evicting the page used furthest in
the future minimizes page faults, but it becomes more difficult to implement
when looking at long access sequences. Comparing future access times
across multiple pages and keeping track of which page would be needed next
requires careful analysis. The examples in the lab were not impossible, but I
feel that the more pages that can be cached, the more complicated this policy
becomes to calculate.
- Identify "aha" moments
-- which topic did you find it easiest to write about, and why did it make
sense?
- My “aha” moment came when I was
practicing the calculations for the different policies in the virtualization
lab. Actually doing it myself rather than just watching was extremely helpful,
and I could better understand how to do the calculations. I had a little
trouble with Belady’s policy, but the more I worked on it, the better I
understood it. I know this could potentially get more complex in the
future but I think with more and more practice I could do these in my
sleep.
- Ask questions -- what do you
think will come next? Are there any gaps that don't seem to be explained
yet?
- We are halfway through the course now, and I feel like I already know a
lot more about how operating systems work. So far we have been building up
the steps of how operating systems work, so I think we will keep building
on them to learn how modern operating systems handle all of the processes
they run. As mentioned in one of the lectures, your machine had over
90,000 processes running, and I do not believe the mechanisms we have
covered so far could handle 90,000 processes.
7. What connections did you see to other classes and applications -- have
you used/heard/thought about these topics before?
· I’ve encountered page replacement concepts before in IT and systems courses. This
material helped me better understand why systems slow down when memory is full
and why simply adding more RAM can drastically improve performance. Page
replacement policies explain the behavior behind common experiences like
applications freezing or disk usage spiking under heavy memory pressure.
