@ajaynitt
Last active July 26, 2017 17:56
OS Concept
Main memory and the registers built into the processor itself are the only
storage that the CPU can access directly.
Registers built into the CPU are generally accessible within one cycle of
the CPU clock.
A base register and a limit register together define a logical address space.
We can provide memory protection with these two registers: an address is
legal only if it lies between the base and base + limit.
The base register is now called a relocation register.
The value in the relocation register is added to every address generated by a
user process at the time the address is sent to memory.
The memory-mapping
hardware converts logical addresses into physical addresses.
The run-time mapping from virtual to physical addresses is done by a
hardware device called the memory-management unit (MMU).
For efficient CPU utilization, we want the execution time for each process
to be long relative to the swap time.
The relocation register contains the value of the
smallest physical address; the limit register contains the range of logical
addresses.
The MMU maps each logical address dynamically by adding the value in the
relocation register.
When the CPU scheduler selects a process for execution, the dispatcher
loads the relocation and limit registers with the correct values as part of the
context switch. Because every address generated by the CPU is checked against
these registers, we can protect both the operating system and the other users'
programs and data from being modified by this running process.
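The check-and-relocate step the MMU performs can be sketched as follows (a sketch only — the `MMU` struct and the register values 14000 and 3000 are made-up example values, not from the text):

```cpp
#include <cstdint>
#include <stdexcept>

// Sketch of what the MMU does in hardware: every logical address is first
// checked against the limit register, then relocated by the base value.
// The register values below are hypothetical examples.
struct MMU {
    uint32_t relocation = 14000;  // smallest physical address of the process
    uint32_t limit      = 3000;   // size of the logical address space

    uint32_t translate(uint32_t logical) const {
        if (logical >= limit)     // out of range: trap to the operating system
            throw std::out_of_range("addressing error: trap to OS");
        return logical + relocation;  // physical address
    }
};
```

With these values, logical address 346 maps to physical address 14346, while logical address 3000 causes a trap.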
If a device driver (or other operating-system service)
is not commonly used, we do not want to keep the code and data in memory, as
we might be able to use that space for other purposes. Such code is sometimes
called transient operating-system code; it comes and goes as needed.
Given N allocated blocks, another 0.5 N blocks will be lost to fragmentation:
of the 1.5 N blocks consumed in total, 0.5 N are unusable. That is, one-third
of memory may be unusable! This property is known as the 50-percent rule.
@ajaynitt

How does 'Mutex' work?

As mentioned before, the data of a mutex is simply an integer in memory. Its value starts at 0, meaning that it is unlocked. If you wish to lock the mutex, you check that it is zero and then assign one. The mutex is now locked and you are its owner.

The trick is that the test-and-set operation has to be atomic. If two threads happen to read 0 at the exact same time, then both would write 1 and think they own the mutex. Without CPU support there is no way to implement a mutex in user space: this operation must be atomic with respect to the other threads. Fortunately CPUs have an instruction, commonly called "compare-and-swap" or "test-and-set", which does exactly this. This function takes the address of the integer, and two integer values: a compare value and a set value. If the compare value matches the current value of the integer, then it is replaced with the new value. In C-style code this might look like this:

/* Atomically: if *to_compare == compare, store set; return the old value. */
int compare_set( int * to_compare, int compare, int set );

int mutex_value = 0;  /* 0 = unlocked */
int result = compare_set( &mutex_value, 0, 1 );
if( !result ) { /* old value was 0: we got the lock */ }

The caller determines what happened from the return value, which is the value at the pointer prior to the swap. If this value equals the compare value, the caller knows the set was successful. If the value is different, the caller knows the value was not changed. When the piece of code is done with the mutex, it can simply set the value back to 0. This makes up the very basic part of our mutex.
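The compare-and-swap loop described above can be sketched with standard C++ atomics. This is a minimal spin lock, not the author's code: `SpinMutex` is a hypothetical name, and a real implementation would add backoff, fairness, and proper memory-order arguments.

```cpp
#include <atomic>

// Minimal spin lock built on compare-and-swap (a sketch, not production code).
class SpinMutex {
    std::atomic<int> value{0};        // 0 = unlocked, 1 = locked
public:
    bool try_lock() {
        int expected = 0;
        // Succeeds only if we atomically change 0 -> 1.
        return value.compare_exchange_strong(expected, 1);
    }
    void lock() {
        int expected = 0;
        // Spin until the 0 -> 1 transition succeeds.
        while (!value.compare_exchange_weak(expected, 1)) {
            expected = 0;             // on failure, expected holds the current value
        }
    }
    void unlock() {
        value.store(0);               // release by writing 0 back
    }
};
```

`compare_exchange_strong` plays the role of `compare_set`: it reports through its return value (and through `expected`) whether the old value matched, which tells the caller whether it now owns the lock.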

@ajaynitt

template <typename Lock>
class LockGuard
{
public:
    explicit LockGuard(Lock& resource) : m_lock(resource)
    {
        m_lock.acquire();
    }

    ~LockGuard()
    {
        m_lock.release();
    }

private:
    // Non-copyable: copying a guard would release the lock twice.
    LockGuard(const LockGuard&);
    LockGuard& operator=(const LockGuard&);

    Lock& m_lock;
};
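Usage might look like this. The `CountingLock` type below is my own stand-in (not from the gist): any type with `acquire()`/`release()` works, and a counting one makes the RAII behaviour observable.

```cpp
// The RAII guard, as defined above.
template <typename Lock>
class LockGuard {
public:
    explicit LockGuard(Lock& resource) : m_lock(resource) { m_lock.acquire(); }
    ~LockGuard() { m_lock.release(); }
private:
    LockGuard(const LockGuard&);
    LockGuard& operator=(const LockGuard&);
    Lock& m_lock;
};

// Hypothetical Lock type matching the acquire()/release() interface the
// guard expects; it counts calls so the pairing is easy to verify.
class CountingLock {
public:
    int acquires = 0, releases = 0;
    void acquire() { ++acquires; }
    void release() { ++releases; }
};

CountingLock lock;

void critical_section() {
    LockGuard<CountingLock> guard(lock);  // acquired here
    // ... work under the lock ...
}                                         // released when guard goes out of scope
```

The point of the guard is that `release()` runs even if the critical section returns early or throws, so lock and unlock always stay paired.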

@ajaynitt

POSIX (/ˈpɒzɪks/ poz-iks), an acronym for "Portable Operating System Interface",[1] is a family of standards specified by the IEEE for maintaining compatibility between operating systems

@ajaynitt

A Thread Group is a set of threads all executing inside the same process. They all share the same
memory, and thus can access the same global variables, same heap memory, same set of file descriptors, etc. All these
threads execute concurrently (interleaved via time slices, or, if the system has several processors, truly in parallel).
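A minimal illustration of this sharing with C++ standard threads (my example; the text itself names no API): both threads update the same global variable, and `std::atomic` keeps the concurrent increments from racing.

```cpp
#include <atomic>
#include <thread>

// Two threads in the same process incrementing one shared (global) counter.
std::atomic<int> shared_total{0};

void add_many(int n) {
    for (int i = 0; i < n; ++i)
        shared_total.fetch_add(1);   // both threads update the same memory
}

int run_two_threads() {
    std::thread a(add_many, 1000);
    std::thread b(add_many, 1000);
    a.join();
    b.join();
    return shared_total.load();      // both threads' increments are visible
}
```

Because the threads share one address space, no copying or message passing is needed; the result reflects the work of both.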

@ajaynitt

The advantage of using a thread group over using a process group is that context switching between threads is much
faster than context switching between processes (context switching means that the system switches from running one
thread or process, to running another thread or process).
