CSG1102
Operating Systems
Joondalup campus
Assignment 1
Memory Management
Tutor: Don Griffiths
Author: Shannon Baker (no. 10353608)
Contents
Virtual Memory with Pages
Virtual Memory Management
A Shared Virtual Memory System for Parallel Computing
Page Placement Algorithms for Large Real-Indexed Caches
Virtual Memory in Contemporary Microprocessors
Machine-Independent Virtual Memory Management for Paged Uniprocessor and Multiprocessor Architectures
Virtual Memory with Segmentation
Segmentation
Virtual Memory, Processes, and Sharing in MULTICS
Virtual Memory
Generic Virtual Memory Management for Operating System Kernels
A Fast Translation Method for Paging on Top of Segmentation
References
Virtual Memory with Pages
Virtual Memory Management
(Deitel, Deitel, & Choffnes, 2004)
A page replacement strategy determines which page to swap out when main memory is full. Several page replacement strategies are discussed in this book: Random, First-In-First-Out (FIFO), Least-Recently-Used (LRU), Least-Frequently-Used (LFU) and Not-Used-Recently (NUR). The Random strategy selects a page in main memory at random for replacement; this is fast, but causes overhead when it happens to select a frequently used page. FIFO removes the page that has been in memory the longest. LRU removes the page that has been least recently accessed; this is more efficient than FIFO but causes more system overhead. LFU replaces pages based on how often they have been used, evicting the page with the fewest references. NUR approximates LRU by checking each page's referenced and modified bits and replacing a page that has not been used recently.
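As a minimal sketch (my addition, not from the book), the Python below counts page faults for the FIFO and LRU strategies on a small, made-up reference string:

    # Count page faults under FIFO and LRU replacement for a fixed frame count.
    from collections import OrderedDict, deque

    def fifo_faults(refs, num_frames):
        frames = deque()                      # oldest page sits at the left
        faults = 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == num_frames:
                    frames.popleft()          # evict the longest-resident page
                frames.append(page)
        return faults

    def lru_faults(refs, num_frames):
        frames = OrderedDict()                # least recently used page at the left
        faults = 0
        for page in refs:
            if page in frames:
                frames.move_to_end(page)      # mark as most recently used
            else:
                faults += 1
                if len(frames) == num_frames:
                    frames.popitem(last=False)  # evict the LRU page
                frames[page] = True
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 9 and 10 faults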
Cache memory is the fastest memory outside of the CPU, with access times of roughly 10-30 ns.
Linux uses virtual memory to free up private or anonymous pages used by a process. When a page is taken out of physical memory, it is copied to the backing store, sometimes also called the swap area. Linux uses the term 'swapping', which conventionally refers to swapping an entire process out of memory, to describe 'paging', the swapping out of individual inactive pages of a process or processes.
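As a small illustration of the swap area in practice (an addition, not from the cited text), Linux reports swap usage through the SwapTotal and SwapFree fields of /proc/meminfo:

    # Report how much of the Linux swap area (backing store) is in use.
    def swap_usage_kb():
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                values[key] = int(rest.strip().split()[0])   # sizes are in kB
        return values["SwapTotal"], values["SwapTotal"] - values["SwapFree"]

    total, used = swap_usage_kb()
    print(f"swap used: {used} of {total} kB")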
Norton, P. (Ed.). (2006). Computing fundamentals [University of Phoenix Custom Edition e-text]. New York, NY: McGraw-Hill. Retrieved January 21, 2011, from CIS105 - Computers-Inside and Out.
Abstract— In this paper we discuss the cache and various mapping techniques, then shift our focus to compressed caching, a technique that tries to reduce paging requests to secondary storage. There is a large performance gap between accessing primary memory (RAM) and secondary storage (disk). Compressed caching intercepts the pages to be swapped out, compresses them, and stores them in a pool allocated in RAM; it thus tries to bridge the performance gap by adding a new level to the virtual memory hierarchy. This paper analyzes the performance of virtual memory compression. Further, to avoid various categories of cache misses, we discuss different cache techniques for achieving higher performance. Lastly, we discuss a few open and challenging issues faced by various cache optimization techniques.
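A minimal sketch of the compressed-caching idea described above, assuming 4 KiB pages and zlib compression; the pool, the function names, and the disk fallback are invented for illustration:

    import zlib

    PAGE_SIZE = 4096
    compressed_pool = {}                     # page number -> compressed bytes in RAM

    def read_from_disk(page_no):
        # Hypothetical fallback to the real swap device.
        raise NotImplementedError("read from the swap area here")

    def swap_out(page_no, page_bytes):
        # Intercept the page on its way to disk and keep a compressed copy in RAM.
        compressed_pool[page_no] = zlib.compress(page_bytes)

    def swap_in(page_no):
        # A hit in the RAM pool avoids a slow disk access entirely.
        if page_no in compressed_pool:
            return zlib.decompress(compressed_pool.pop(page_no))
        return read_from_disk(page_no)

    page = bytes(PAGE_SIZE)                  # a zero-filled page compresses very well
    swap_out(7, page)
    assert swap_in(7) == page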
These low-level tasks are all now handled by the operating system, which acts as an intermediary between the "higher level" programs that accommodate users and the lower-level tasks of system management already mentioned above, memory management among them.
Virtualization is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way that people compute. Today’s powerful x86 computer hardware was designed to run a single operating system and a single application. This leaves most machines vastly underutilized. Virtualization lets you run multiple virtual machines on a single physical machine, sharing the resources of that single computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer. (Virtualization Basics)
There are two fundamental contributions of the study. The first is to show that classifying data blocks at block granularity identifies significantly more classified data blocks than the work in a few earlier studies that organized cache blocks at page and sub-page granularity. The method significantly reduces the proportion of blocks in the directory necessary for tracking, in comparison with the same coarser-grained classification approaches. This, in turn, lowers the storage overhead of the directory.
Abstract— There are various cases where processors share memory, with one process reading while another writes. Sharing creates no problem while all of the processes sharing memory are only reading it; the problem arises when a write operation is performed. A write changes memory, so another processor may later use a previous value that is no longer current, producing an error. This need drove the development of cache coherence, whose protocols prevent inconsistent duplication, utilization, and updating of memory values used by the various processes in a central processing unit. The cache coherence protocols discussed are directory-based, snooping, snarfing, and distributed shared memory; various protocols have been implemented, and new methods are being researched for better performance and for solving open issues by leading companies such as AMD, Nvidia, and Intel. The paper discusses the various protocols used to implement cache coherence between processors.
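To make the snooping approach concrete, here is a toy sketch (my addition, not from the paper) of the MSI invalidation protocol: each cache keeps a per-line state of Modified, Shared, or Invalid and snoops the bus so that a write in one cache invalidates stale copies in the others:

    MODIFIED, SHARED, INVALID = "M", "S", "I"

    class Bus:
        def __init__(self):
            self.caches = []
        def broadcast(self, op, addr, origin):
            for cache in self.caches:
                if cache is not origin:
                    cache.snoop(op, addr)

    class Cache:
        def __init__(self, bus):
            self.bus, self.state = bus, {}
            bus.caches.append(self)
        def read(self, addr):
            if self.state.get(addr, INVALID) == INVALID:
                self.bus.broadcast("BusRd", addr, origin=self)   # fetch a copy
                self.state[addr] = SHARED
        def write(self, addr):
            if self.state.get(addr, INVALID) != MODIFIED:
                self.bus.broadcast("BusRdX", addr, origin=self)  # claim exclusive
                self.state[addr] = MODIFIED
        def snoop(self, op, addr):
            if op == "BusRdX":
                self.state[addr] = INVALID                       # another cache wrote
            elif op == "BusRd" and self.state.get(addr) == MODIFIED:
                self.state[addr] = SHARED                        # supply data, demote

    bus = Bus()
    a, b = Cache(bus), Cache(bus)
    a.read(0x40)
    b.write(0x40)                            # invalidates A's stale copy
    print(a.state[0x40], b.state[0x40])      # prints: I M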
The hypervisor copies all the memory pages from source to destination while the VM is still running on the source. If some memory pages change, becoming 'dirty', while this is going on, they are re-copied in further rounds; this repeats until the rate at which pages can be re-copied is no longer greater than the rate at which they are being dirtied.
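A hedged sketch of that iterative pre-copy loop; the page counts and rates below are invented parameters, not measurements:

    def precopy_migrate(total_pages, copy_rate, dirty_rate, max_rounds=30):
        """Return how many pages are left for the final stop-and-copy round."""
        to_send = total_pages
        for round_no in range(max_rounds):
            seconds = to_send / copy_rate                 # time to copy this round
            dirtied = min(total_pages, int(seconds * dirty_rate))
            print(f"round {round_no}: sent {to_send}, {dirtied} dirtied meanwhile")
            if dirtied >= to_send:                        # no longer converging:
                return dirtied                            # pause the VM, copy the rest
            to_send = dirtied
            if to_send == 0:
                break
        return to_send

    left = precopy_migrate(total_pages=100_000, copy_rate=50_000, dirty_rate=5_000)
    print("final stop-and-copy transfers", left, "pages")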
Real-time systems support both techniques, each of which distributes memory in a different way.
Memory paging is a critical element of an operating system's performance and efficiency. Implementing paging allows processes to run even while parts of them remain in secondary memory, by translating virtual addresses into physical addresses. This research will look at the methods, mechanisms, and algorithms behind memory paging without regard to a specific operating system. Explanations of the paging process will begin at an elementary, top-level view, then progress into a detailed view concerning data structures, addressing, page tables, and other related elements. The Intel 64 and IA-32 architecture will be examined, along with how it implements paging, specifically through a hierarchical scheme and the use of a translation lookaside buffer. Issues such as thrashing and speed concerns with regard to the hardware used will also be examined, together with how algorithms and better hardware can mitigate them. The research will conclude with how a user can best take advantage of paging to improve memory performance and speed. Algorithms governing how pages are swapped in main memory are related to the paging process and will be mentioned, but are beyond the scope of this paper.
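As a sketch of the hierarchical scheme the abstract mentions: Intel 64's 4-level paging splits a 48-bit virtual address into four 9-bit table indices plus a 12-bit page offset, and a translation lookaside buffer caches finished translations. The tables and the single mapping below are invented for illustration:

    TLB = {}                                   # virtual page -> physical frame

    def translate(va, pml4):
        page, offset = va >> 12, va & 0xFFF    # 4 KiB pages: 12 offset bits
        if page in TLB:                        # TLB hit: skip the table walk
            return (TLB[page] << 12) | offset
        # 9-bit indices for the PML4, PDPT, PD, and PT levels.
        indices = [(va >> shift) & 0x1FF for shift in (39, 30, 21, 12)]
        table = pml4
        for idx in indices[:-1]:               # walk PML4 -> PDPT -> PD
            table = table[idx]                 # a missing entry would page-fault
        frame = table[indices[-1]]             # the PT entry holds the frame number
        TLB[page] = frame                      # cache the translation
        return (frame << 12) | offset

    # One invented mapping: virtual page 0 -> physical frame 0x42.
    pt = {0: 0x42}; pd = {0: pt}; pdpt = {0: pd}; pml4 = {0: pdpt}
    print(hex(translate(0x0ABC, pml4)))        # -> 0x42abc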
This paper will analyze the development of drivers for virtual machines, as well as how virtual machines access host hardware. Topics covered will include the interest that I/O driver virtualization holds for the computer information science field, a general overview of virtualization, I/O hardware virtualization, and virtualization of I/O drivers.
But as time passed and technology advanced, bringing much new and valuable capability, a voice echoed, more or less a chorus: "You have insufficient memory!" Memory management has had many issues that researchers have come across, but above all is memory size, which fills up in no time and leaves us again in search of more memory to store the data we cannot afford to lose. In earlier days, computers stored data in their Random Access Memory (RAM), which was certainly not enough to hold large amounts of data and was, moreover, volatile (temporary). This is why disks were introduced; they are now widely used in our daily lives.
Therefore, features such as hardware caching or data-dependency optimizations that increase average performance are undesirable in Chimera. Further, inter-process security is also given up to reduce the overhead of performing system calls, since all the processes running on a single CPU are meant to be invoked by a single user and all processes have equal privileges. Since real-time programs are short, repetitive operations with minimal data and instruction footprints, virtual memory is not required, which eliminates memory-management overhead from process context-switch operations.
Now we must discuss RAM (Random Access Memory). Even though it is wiped clean every time you shut down your computer, your random access memory stores the location of that data, so the CPU doesn't need to go back out on the address bus and find the data's location on your hard drive. It's like going to the library: if you need to know the definition of a word, you don't get up repeatedly, walk over to the shelf with the dictionary, and open it to the corresponding page. Instead, you take the dictionary off the shelf and keep it at the table until you are done. Your RAM does the same thing, thus speeding up the process.