@onyb
Last active January 17, 2022 19:54
Operating Systems (Third Edition): Chapter 9
Operating Systems (Third Edition)
Harvey M. Deitel, Paul J. Deitel, David R. Choffnes
-------------------------------------------------------------------------------
CHAPTER 9: Real Memory Organization
-------------------------------------------------------------------------------
# Memory Hierarchy
- Main memory
* Should store currently needed program instructions and data only
- Secondary storage
* Stores data and programs that are not actively needed
- Cache memory
* Extremely high speed.
* Very expensive, so only small cache memories are used.
* On modern systems, cache is located on the processor itself.
* Most frequently used data copied to cache for faster access.
* Programs in main memory moved to the cache before being executed.
Executing programs from cache is faster than from main memory.
* Temporal locality: Recently used data and instructions are likely
to be accessed again in the near future.
- Modern systems include hardware units called memory controllers that
perform memory transfer operations with virtually no computational
overhead.
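The benefit of temporal locality can be illustrated with a toy cache simulation. This is a minimal sketch, not how real hardware caches work: it assumes a tiny fully associative cache with least-recently-used eviction, and the addresses are made up.

```python
from collections import OrderedDict

class TinyCache:
    """Toy fully associative cache with LRU eviction (illustrative only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> present, in LRU order
        self.hits = 0
        self.misses = 0

    def access(self, address):
        if address in self.lines:
            self.hits += 1
            self.lines.move_to_end(address)     # mark most recently used
        else:
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used
            self.lines[address] = True

cache = TinyCache(capacity=2)
for addr in [0x10, 0x20, 0x10, 0x10, 0x30, 0x20]:
    cache.access(addr)
# The repeated accesses to 0x10 hit while it is still cached:
# cache.hits == 2, cache.misses == 4
```

Because 0x10 is reused soon after it is first loaded, those accesses are served from the fast cache rather than main memory, which is exactly the behavior temporal locality predicts.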
# Memory Management Strategies
- Memory manager is an operating system component which implements schemes
to optimize memory management.
- Fetch strategies: Determine when to load the next piece of a program or
data to main memory from secondary storage.
* Demand fetch strategy: System loads the next piece of program or data
in memory when a running program references it.
* Anticipatory fetch strategy: System attempts to load a piece of
program or data before it is referenced.
- Placement strategies: Determine where in main memory the system should
place the incoming program or data.
* first-fit
* best-fit
* worst-fit
- Replacement strategies: Determine which data to remove from main memory
to make more space.
# Contiguous vs Non-contiguous Memory Allocation
- Contiguous allocation
* Program must exist as a single block of contiguous addresses
* Sometimes it is impossible to find a block large enough
* Low overhead
- Noncontiguous allocation
* Program divided into chunks called segments
* Each segment can be placed in different part of memory
* Easier to find blocks in which a segment may fit
* Increased number of processes that can occupy main memory at once
# Single-User Contiguous Memory Allocation
- Early systems allowed only one user to access the machine at a time, so
sharing of resources was not needed.
- Input-Output Control Systems (IOCS)
* Libraries of prewritten code to manage I/O devices
* Precursor to operating systems
- Overlays: Programming technique to overcome contiguous allocation limits
* Program divided into logical sections
* When a program does not need the memory for one section, the system
can replace some or all of it with the memory of a needed section.
* Enabled programmers to write programs larger than real memory
* Increased program complexity, size of programs, and cost of software
development.
* Virtual memory overcomes this problem. Like IOCS, virtual memory
shields programmers from complex issues such as memory management.
- Operating system must not be damaged by processes.
- Boundary register
* Contains the memory address at which user's program begins.
* Any memory accesses outside boundary are denied
* Can be modified only by a privileged instruction.
* Applications can access OS memory to execute OS procedures known as
system calls or supervisor calls.
* When a process issues a system call, the system switches from user
mode to kernel mode (executive mode) of execution.
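The boundary-register check above can be sketched in a few lines. This is a simplified model, not real hardware: the boundary value and the `check_access` helper are assumptions for illustration, and kernel mode is modeled as a simple flag.

```python
BOUNDARY = 0x4000  # assumed address at which the user's program begins

def check_access(address, kernel_mode=False):
    """Return True if the memory access is permitted.

    In user mode, any access below the boundary (i.e., into OS memory)
    is denied; in kernel mode (entered via a system call), all addresses
    are accessible.
    """
    if kernel_mode:
        return True
    return address >= BOUNDARY

check_access(0x5000)                     # user accessing its own region
check_access(0x1000)                     # user touching OS memory: denied
check_access(0x1000, kernel_mode=True)   # allowed during a system call
```

Because only a privileged instruction can change the boundary register, a user process cannot widen its own accessible region.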
# Fixed-Partition Multiprogramming
- To take maximum advantage of multiprogramming, several processes must
reside in the main memory at the same time.
- I/O requests can tie up a processor for long periods.
- Time spent waiting for I/O is usually much longer than time spent
actively using the processor.
- Process not actively using a processor (e.g., waiting for I/O
completion) should relinquish it to others.
- In fixed-partition multiprogramming, each active process receives a
fixed-size block of memory.
- Processor rapidly switches between each process.
- In early implementations, the OS used absolute assemblers and loaders.
Processes were loaded according to a predetermined absolute address.
This resulted in wastage of memory since processes could run only in a
specific partition. If a job was ready to run and the program's partition
was occupied, then that job had to wait, even if other partitions were
available.
- To overcome the above problem of memory waste, developers created
relocating compilers, assemblers, and loaders. A relocatable program can
run in any available partition large enough for that program.
- In a single-user system, the system must protect only the OS from the
user process. In a multiprogramming system, the system must protect the
OS from all user processes, and protect each process from all others.
- The above is achieved by delimiting partitions with two boundary
registers, called the base (low) and limit (high) registers.
- This memory organization scheme suffers from internal fragmentation, which
occurs when the size of a process's memory and data is smaller than that
of the partition in which the process executes.
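The base/limit check and the internal fragmentation it leaves behind can be sketched as follows. The partition and process sizes are invented for illustration; real hardware performs the bounds check on every memory reference.

```python
def valid_access(address, base, limit):
    """A memory access is legal only within [base, limit)."""
    return base <= address < limit

def internal_fragmentation(partition_size, process_size):
    """Memory wasted inside a fixed partition by a smaller process."""
    return partition_size - process_size

# A 64 KB partition at base 128 KB holding a 50 KB process:
base, limit = 128 * 1024, 192 * 1024
valid_access(base + 10, base, limit)    # within the partition: allowed
valid_access(limit, base, limit)        # one past the end: denied
internal_fragmentation(64 * 1024, 50 * 1024)  # 14 KB wasted
```

The 14 KB of slack inside the partition is internal fragmentation: it is allocated to the process's partition but unusable by any other process.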
# Variable-Partition Multiprogramming
- System progresses through the job queue and places each job in memory, at
which point it becomes a process.
- No space wasted initially. Partitions are exactly the size they need to
be. Hence internal fragmentation is impossible.
- External fragmentation can occur when processes are removed. The sum of
holes may be enough to accommodate another process, but the size of each
hole is too small to accommodate any available process.
- Ways to combat external fragmentation:
* Coalescing
> Merge adjacent free blocks into one large block.
> Even as the OS coalesces holes, space occupied by the separate
holes distributed throughout the main memory may be significant.
* Compaction
> Sometimes called garbage collection (not to be confused with GC
in object-oriented languages).
> Rearranges memory into a single contiguous block of free space
and a single contiguous block of occupied space.
> Significant overhead due to consumption of system resources. The
system must also cease all other computation during compaction,
which can be devastating for real-time systems.
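Coalescing can be sketched as a pass over the free list that merges holes touching in memory. This is a minimal model, assuming the free list is a list of `(start, size)` pairs; compaction would instead slide all occupied blocks together, leaving one large hole.

```python
def coalesce(holes):
    """Merge adjacent free holes; holes is a list of (start, size) pairs."""
    if not holes:
        return []
    holes = sorted(holes)
    merged = [holes[0]]
    for start, size in holes[1:]:
        prev_start, prev_size = merged[-1]
        if prev_start + prev_size == start:         # holes touch: merge them
            merged[-1] = (prev_start, prev_size + size)
        else:
            merged.append((start, size))
    return merged

# Holes at 100 (20 bytes) and 120 (30 bytes) are adjacent and merge
# into one 50-byte hole; the hole at 200 stays separate.
coalesce([(100, 20), (120, 30), (200, 10)])  # [(100, 50), (200, 10)]
```

Note that coalescing only merges holes that happen to be adjacent; holes separated by occupied blocks remain scattered, which is why compaction is sometimes still needed despite its overhead.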
# Memory Placement Strategies
- Determine the hole where data must be loaded in a variable-partition
multiprogramming system.
- First-fit strategy
* Process placed in first hole of sufficient size found.
* Simple with low execution-time overhead.
* May operate slowly if the holes that are too small to hold the
incoming job are at the front of the free memory list.
- Best-fit strategy
* Process placed in hole that leaves least unused space around it.
* May create very small and unusable holes if the fit is not perfect.
* More execution-time overhead due to searching for the best hole.
* Need to maintain the entries of free memory list in sorted order,
which is computationally expensive.
- Worst-fit strategy
* Process placed in hole that leaves most unused space around it.
* Leaves another large hole, making it more likely that another process
can fit in the hole.
* Like best-fit, worst-fit strategy requires the overhead of finding
the largest hole.
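The three placement strategies above can be sketched over a free list of hole sizes. The helper names and the example hole sizes are invented for illustration; each function returns the index of the chosen hole, or None if no hole is large enough.

```python
def first_fit(holes, size):
    """Index of the first hole of sufficient size."""
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    """Index of the smallest hole that still fits (least leftover space)."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole (most leftover space)."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None

holes = [30, 10, 25, 50]  # free-hole sizes in KB (made up for the example)
first_fit(holes, 20)      # 0: the 30 KB hole is the first that fits
best_fit(holes, 20)       # 2: the 25 KB hole leaves the least slack
worst_fit(holes, 20)      # 3: the 50 KB hole leaves the most slack
```

The example shows the trade-offs directly: best-fit leaves a tiny 5 KB fragment, worst-fit leaves a still-usable 30 KB hole, and first-fit avoids scanning the whole list.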
# Multiprogramming with Memory Swapping
- Not necessary to keep inactive processes in memory.
- Keep only the currently running process in main memory, while others are
temporarily moved to secondary storage.
- Maximizes available main memory.
- Significant overhead when switching processes.