OS Concept
Main memory and the registers built into the processor itself are the only storage that the CPU can access directly. Registers that are built into the CPU are generally accessible within one cycle of the CPU clock.
A base and a limit register define a logical address space. We can provide memory protection by using two registers, usually a base and a limit. The base register is now called a relocation register: the value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.
The memory-mapping hardware converts logical addresses into physical addresses. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU). For efficient CPU utilization, we want the execution time for each process to be long relative to the swap time.
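The trade-off between execution time and swap time can be made concrete with a quick calculation. The process size, transfer rate, and run times below are made-up illustrative numbers, not from the notes:

```python
# Hypothetical example: swapping a 100 MB process out and back in over a
# 50 MB/s disk costs 2 * (100 / 50) = 4 seconds of pure I/O overhead.
swap_time = 2 * (100 / 50)

def cpu_utilization(run_time, swap_time):
    """Fraction of the interval spent doing useful work rather than swapping."""
    return run_time / (run_time + swap_time)

print(cpu_utilization(1, swap_time))   # short run: most of the interval is swap overhead
print(cpu_utilization(40, swap_time))  # long run: the swap cost is amortized
```

With a 1-second quantum, only 20% of the interval is useful work; with a 40-second run the swap cost shrinks to under a tenth of the interval, which is why execution time must dominate swap time.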
The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses. The MMU maps the logical address dynamically by adding the value in the relocation register.
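The mapping can be sketched in a few lines. The register values below (relocation 14000, limit 3000) are arbitrary example numbers chosen only for illustration:

```python
# Minimal sketch of MMU dynamic relocation with a limit check.
RELOCATION = 14000   # smallest physical address of the process's partition
LIMIT = 3000         # size of the process's logical address space

def translate(logical):
    """Map a logical address to a physical one; trap on a violation."""
    if logical >= LIMIT:              # every CPU-generated address is checked
        raise MemoryError(f"trap: logical address {logical} out of range")
    return logical + RELOCATION       # hardware adds the relocation register

print(translate(346))   # logical 346 resolves to physical 14346
```

A logical address of 3000 or more never reaches memory; the MMU traps to the operating system instead, which is the protection mechanism described above.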
When the CPU scheduler selects a process for execution, the dispatcher | |
loads the relocation and limit registers with the correct values as part of the | |
context switch. Because every address generated by the CPU is checked against | |
these registers, we can protect both the operating system and the other users' | |
programs and data from being modified by this running process. | |
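The dispatcher's role can be sketched as follows. The process names, register values, and PCB layout here are illustrative assumptions, not from the notes:

```python
# Sketch: on a context switch, the dispatcher loads the relocation and
# limit registers from the incoming process's control block (PCB).

class MMU:
    def __init__(self):
        self.relocation = 0
        self.limit = 0

    def translate(self, logical):
        if logical >= self.limit:          # check against the limit register
            raise MemoryError("trap: addressing error")
        return logical + self.relocation   # add the relocation register

mmu = MMU()
pcb_table = {
    "P1": {"relocation": 14000, "limit": 3000},
    "P2": {"relocation": 40000, "limit": 5000},
}

def dispatch(process):
    """Load the MMU registers as part of the context switch."""
    mmu.relocation = pcb_table[process]["relocation"]
    mmu.limit = pcb_table[process]["limit"]

dispatch("P1")
print(mmu.translate(100))   # resolves inside P1's partition: 14100
dispatch("P2")
print(mmu.translate(100))   # same logical address, different frame: 40100
```

The same logical address lands in a different physical region depending on which process is running, so neither process can reach the other's memory.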
If a device driver (or other operating-system service) is not commonly used, we do not want to keep the code and data in memory, as we might be able to use that space for other purposes. Such code is sometimes called transient operating-system code; it comes and goes as needed.
With first-fit allocation, statistical analysis shows that, given N allocated blocks, another 0.5N blocks will be lost to fragmentation. That is, one-third of memory may be unusable! This property is known as the 50-percent rule.
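The one-third figure follows directly from the 0.5N estimate; the choice of N below is arbitrary since the fraction is independent of it:

```python
# Arithmetic behind the 50-percent rule: for every N allocated blocks,
# roughly 0.5N blocks are lost to fragmentation.
N = 1000
lost = 0.5 * N
fraction_unusable = lost / (N + lost)   # lost blocks over total blocks
print(fraction_unusable)                # 0.5N / 1.5N = 1/3 of memory
```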
The advantage of using a thread group over using a process group is that context switching between threads is much faster than context switching between processes (a context switch means the system stops running one thread or process and starts running another).
A Thread Group is a set of threads all executing inside the same process. They all share the same memory, and thus can access the same global variables, same heap memory, same set of file descriptors, etc. All these threads execute concurrently (interleaved via time slices or, if the system has several processors, truly in parallel).
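The shared memory can be demonstrated with a small sketch; the counter, the thread count, and the iteration count are arbitrary choices for illustration:

```python
# Threads in one process share the same globals and heap, so they can all
# update a single variable -- provided access is synchronized.
import threading

counter = 0                       # one global, visible to every thread
lock = threading.Lock()

def worker():
    global counter
    for _ in range(10_000):
        with lock:                # shared data needs mutual exclusion
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # all four threads incremented the same variable: 40000
```

Separate processes would each see their own copy of `counter`; sharing it between processes would require explicit shared-memory or message-passing machinery, which is exactly the overhead the thread-group approach avoids.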