INTRODUCTION
Live migration is the movement of a running virtual machine or application between physical machines without disconnecting the client or application. The memory, storage, and network connectivity of the virtual machine must all be migrated from the source host to the destination host.
VM Memory migration:
Two techniques are used to move the memory state of a virtual machine from the source to the destination: 1. pre-copy memory migration and 2. post-copy memory migration.
Pre-copy memory migration:
Warm-up phase-
The hypervisor copies all memory pages from the source to the destination while the VM continues to run on the source. Pages that are modified during this process become 'dirty' and must be re-copied; this is repeated in successive rounds until the rate of re-copied pages is no less than the page dirtying rate.
Stop and copy phase-
After the warm-up phase is completed, the VM is stopped on the original host, the remaining dirty pages are copied to the destination, and the VM is resumed there. The "down-time" (the time between stopping the VM on the original host and resuming it on the destination) ranges from a few milliseconds to seconds, depending on the memory size and the applications running on the VM.
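The warm-up and stop-and-copy phases described above can be sketched as a simple simulation. Everything here is illustrative: the page count, dirtying probability, round limit, and stop threshold are made-up parameters, not measurements from any real hypervisor.

```python
import random

def pre_copy_migrate(num_pages=1024, dirty_prob=0.02, max_rounds=30,
                     stop_threshold=32, seed=42):
    """Toy model of pre-copy migration.

    Each warm-up round re-copies the pages dirtied during the previous
    round. Rounds stop when the dirty set is small enough (or after
    max_rounds); the remainder is copied with the VM paused, which
    models the stop-and-copy phase and its down-time.
    """
    rng = rng = random.Random(seed)
    dirty = set(range(num_pages))          # round 0: copy every page
    total_copied = 0
    rounds = 0
    while dirty and len(dirty) > stop_threshold and rounds < max_rounds:
        total_copied += len(dirty)         # copy the current dirty set
        # While copying, the still-running VM dirties some pages again.
        dirty = {p for p in range(num_pages) if rng.random() < dirty_prob}
        rounds += 1
    downtime_pages = len(dirty)            # copied while the VM is stopped
    total_copied += downtime_pages
    return rounds, total_copied, downtime_pages
```

Note that the total number of pages copied exceeds the VM's memory size whenever pages are dirtied between rounds; this extra traffic is the price pre-copy pays for its short down-time.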
Post-copy memory migration:
Post-copy VM migration is carried out by suspending the VM at the source and transferring a minimal subset of its execution state (CPU state, registers, and non-pageable memory) to the destination, where the VM is resumed. The remaining memory pages are then fetched from the source over the network, typically on demand when the resumed VM faults on them.
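The demand-fetch behavior of post-copy can be sketched as follows. This is a deliberately minimal model: it only counts remote page faults for a hypothetical access sequence, and ignores the background pre-paging that real post-copy implementations usually add.

```python
def post_copy_faults(access_sequence):
    """Count remote page faults under a toy post-copy model.

    The VM resumes at the destination with no memory pages present;
    the first access to each page triggers a network fetch from the
    source (a remote page fault), after which the page is local.
    """
    fetched = set()   # pages already pulled to the destination
    faults = 0
    for page in access_sequence:
        if page not in fetched:
            faults += 1           # remote fault: fetch over the network
            fetched.add(page)
        # the page is now local, so the access proceeds normally
    return faults
```

In this model each page is transferred exactly once, so post-copy never re-sends pages the way pre-copy does; the trade-off is that every first access pays a network round-trip.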
Virtualization is a combination of software and hardware engineering that creates Virtual Machines (VMs) - an abstraction of the computer hardware that allows a single machine to act as if it were many machines, or a computer that does not physically exist as a piece of hardware. The hardware seen by the operating system is emulated in an effort to separate the physical hardware from the operating system. This allows the virtual machine to be moved and hosted on any machine, independent of the hardware. Virtualization technology is possibly the single most important issue in IT and has started a top-to-bottom overhaul of the computing industry, which is why many companies around the world are using virtualization software to enhance their business opportunities.
i. Server consolidation is the process of hosting multiple virtual machines (VM) on one physical server.
ii. Live migration is the process of moving a virtual machine from one physical host to another, for example in the event of a failure, to provide fault tolerance.
Virtual Machine Security - Full virtualization and para-virtualization are two kinds of virtualization in the cloud computing paradigm. In full virtualization, the entire hardware architecture is replicated virtually. In para-virtualization, the operating system is modified so that it can run concurrently with other operating systems. VMM instance isolation ensures that different instances running on the same physical machine are isolated from each other. However, current VMMs do not offer perfect isolation: bugs have been found in all popular VMMs that allow escaping from the VM, and vulnerabilities have been found in all virtualization software that malicious users can exploit to bypass certain security restrictions and/or gain escalated privileges.
Application software running on, or being developed for, cloud computing platforms presents different security challenges depending on the delivery model of the particular platform. The flexibility, openness, and public availability of cloud infrastructure are threats to application security, and existing vulnerabilities such as trap doors, overflow problems, and poor-quality code open the door to various attacks. The multi-tenant environment of cloud platforms, the lack of direct control over the environment, and access to data by the cloud platform vendor are the key issues when using a cloud application. Preserving the integrity of applications executing on remote machines remains an open problem.
1. Consider a processor that supports virtual memory. It has a virtually indexed physically tagged cache, TLB, and page table in memory. Explain what happens in such a processor from the time the CPU generates a virtual address to the point where the referenced memory contents are available to the processor.
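One way to reason about this question is to trace the translation path step by step: the virtual address is split into a virtual page number (VPN) and an offset; the TLB is probed first, and on a miss the page table in memory is walked (possibly raising a page fault); the resulting physical frame number supplies the physical tag that the virtually indexed, physically tagged cache compares against. The toy model below sketches only the TLB/page-table part; the dict-based structures and 4 KiB page size are illustrative assumptions, not any particular processor's design.

```python
def translate(vaddr, tlb, page_table, page_size=4096):
    """Toy model of virtual-to-physical address translation.

    Probes the TLB first; on a miss, walks the (flat, dict-based)
    page table and refills the TLB. Returns the physical address
    whose frame bits would be compared against the cache's physical
    tags in a virtually indexed, physically tagged cache.
    """
    vpn, offset = divmod(vaddr, page_size)   # split VPN from page offset
    if vpn in tlb:                           # TLB hit: no memory access
        pfn = tlb[vpn]
    else:                                    # TLB miss: page-table walk
        if vpn not in page_table:
            raise RuntimeError("page fault: OS must bring the page in")
        pfn = page_table[vpn]
        tlb[vpn] = pfn                       # refill the TLB entry
    return pfn * page_size + offset          # physical address
```

A virtually indexed cache can begin its index lookup with the page offset bits in parallel with this translation; the physical tag comparison must wait for the PFN, which is why the TLB sits on the critical path of every memory access.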
Linux uses virtual memory to free up private or anonymous pages used by a process. When a page is taken out of physical memory, it is copied to the backing store, sometimes called the swap area. Linux uses the term 'swapping', which traditionally refers to moving an entire process out of memory, to describe 'paging', which is moving out only the inactive pages of a process or processes.
Network-based virtualization abstracts data storage applications from the host machine. This is achieved through Fibre Channel connections between the machines and the servers running the virtualization. The operating systems on the separate machines are not a factor to consider, as they work independently. For network-based virtualization to meet its expectations, certain supporting services must be provided.
In computer organization and architecture, memory is one of the most important parts of a computer, and every computer must have its own memory. A memory cell holds one of two stable or semi-stable states, representing 1 and 0, and is capable of being written at least once and read multiple times. In this lab we will look briefly at two kinds of memory, RAM and ROM. Memory can be either non-volatile (as in read-only memory) or volatile (as in random-access memory).
In conclusion, the report proposes that any of the migration paths is applicable. The elements that must be considered are the merits of each case and the availability of resources, which together determine the path to follow. The complexity and scope of each case must be dealt with during the migration process.
In this article, you will learn precisely what virtual memory is, what your PC uses it for, and how to configure it on your own machine to achieve optimal performance.
Virtualization is being able to give a physical device the power, through the use of software, to do more than that physical device was technically designed and able to do (Santana, 2014, p. 12). For example, a server can only run one operating system at a time. However, when a hypervisor is used in a server, the hypervisor is a layer of software that acts like the server itself so that many operating systems can be run from that one server. The hardware, in this case a server, has been virtualized. The goal is to use all of the computer’s resources all of the time, and the only way to do that is to have enough things running that the resources are being used consistently and efficiently. An analogy for this could be online classes. If each teacher only had one student, the teacher’s resources of time and expertise would not be utilized efficiently because that one student will not need help all day, every day. If the teacher is assigned to fifteen students, the students can still get help when needed from the teacher, and they would not even be aware that they are not alone in the class. Because it is an online class, the teacher does not need any more physical resources to teach an entire class than was needed for one student. The students are receiving the benefits of being taught by that teacher without needing to be with him or her physically.
A template is a master copy of a virtual machine that is used to create new virtual machines.
During this phase, the consultants determine the hardware and software needed to support the virtual environment. After reviewing the results, the consultants determined that three servers, three Microsoft Windows Server 2012 R2 Datacenter licenses, and an application from Starwind would best meet ABC's needs. The Windows licenses allow ABC to deploy multiple VM instances without purchasing extra copies of the Windows Server 2012 R2 operating system. The Starwind application will be used to create a virtual storage area network (vSAN), which provides high availability and redundancy. A physical SAN, although efficient and reliable, is expensive, and ABC's current size doesn't justify its use.
These virtual machines each have a separate operating system, memory, processor, and other resources, which are shared out of the resources of the host machine.
A page replacement strategy determines which page to swap out when main memory is full. Several strategies are discussed in this book: Random, First-In-First-Out (FIFO), Least-Recently-Used (LRU), Least-Frequently-Used (LFU), and Not-Used-Recently (NUR). The Random strategy selects a page in main memory at random for replacement; this is fast, but can cause overhead if it evicts a frequently used page. FIFO removes the page that has been in memory the longest. LRU removes the page that has been least recently accessed; this is usually more effective than FIFO but incurs more system overhead. LFU replaces pages based on how frequently they are used, evicting the page with the fewest references.
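The FIFO and LRU strategies described above can be sketched as fault counters over a page reference string. This is a minimal illustration (plain Python containers, no timing of the bookkeeping overhead the text mentions); the reference string in the usage note is the classic example from Belady's work.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())   # evict the oldest resident page
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement.

    An OrderedDict doubles as a recency list: the front holds the
    least recently used page, the back the most recently used.
    """
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                 # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)        # evict least recently used
            mem[p] = True
    return mem and faults or faults
```

For the reference string 1 2 3 4 1 2 5 1 2 3 4 5 with three frames, FIFO incurs 9 faults and LRU 10; which strategy wins depends on the locality of the workload, which is why no single strategy dominates.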