Containers: LITERATURE SURVEY
A container is an operating-system-level virtualization environment used to run a number of isolated systems on a single control host. Linux containers are built on the concept of kernel namespaces. Namespaces create an isolated container that has no visibility of, or access to, objects outside the container. Processes running inside the container appear to be running on a normal Linux system, although they share the underlying kernel with processes located in other namespaces. There are two main types of containers:
1. System container - This container acts like a full-fledged OS and runs system processes such as init, syslogd, etc.
2. Application container - This container runs a single application and uses limited resources.
Both these types of containers are useful in different scenarios.
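The namespace isolation described above can be observed directly on a Linux host, where the kernel exposes each process's namespaces as symlinks under /proc/&lt;pid&gt;/ns. A minimal sketch (Linux-only; the exact set of namespace types varies by kernel version):

```python
import os

# Linux exposes a process's namespaces as symlinks under /proc/<pid>/ns.
# Two processes share a namespace exactly when the corresponding
# symlinks resolve to the same inode. (Linux-only sketch.)
ns_dir = "/proc/self/ns"
namespaces = sorted(os.listdir(ns_dir))
print(namespaces)  # e.g. ['cgroup', 'ipc', 'mnt', 'net', 'pid', ...]

# The namespace identity of the current process:
for name in namespaces:
    target = os.readlink(os.path.join(ns_dir, name))
    print(f"{name}: {target}")  # e.g. "pid: pid:[4026531836]"
```

A container runtime places its processes in fresh namespaces, so the inode numbers printed inside the container differ from those on the host.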
Because containers share the host kernel rather than running a full guest OS, they are far lighter than VMs; this advantage plays a significant role in optimizing the use of resources, especially when the host OS is highly hardware-intensive. As a result, more containers can be deployed on a given host than virtual machines (typically 2 to 6 times as many).
The following figure highlights the major differences between VMs and containers:
Today, intensive research is being conducted in the field of containers. Large corporations like Google and Twitter are investing heavily in developing container technology. The Docker project and Google's Kubernetes project are two open-source projects that have gained significant traction recently. We shall be using this software to support different modules of our project.
The graph below compares the resources used by containers and virtual machines.
Figure: Docker vs KVM
Containers hold the components necessary to run the desired software, such as files, environment variables, and libraries. The host OS also constrains a container's access to physical resources – such as CPU and memory – so a single container cannot misbehave and consume all of a host's physical resources.
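On Linux, these per-container caps are enforced by the kernel's control groups (cgroups). As a rough analogy using only the standard library (POSIX rlimits, not cgroups, so this merely illustrates the idea of a bounded workload rather than the actual container mechanism):

```python
import resource
import subprocess
import sys

# Illustration only: container runtimes enforce such caps with cgroups;
# here POSIX rlimits stand in for the idea that one workload cannot
# consume all of the host's memory. (Unix-only; behaviour checked on Linux.)
LIMIT = 1024 ** 3  # 1 GiB address-space cap for the child process

child_code = (
    "import resource\n"
    f"resource.setrlimit(resource.RLIMIT_AS, ({LIMIT}, {LIMIT}))\n"
    "try:\n"
    "    bytearray(2 * 1024 ** 3)  # try to grab 2 GiB\n"
    "    print('allocated')\n"
    "except MemoryError:\n"
    "    print('blocked by limit')\n"
)
result = subprocess.run([sys.executable, "-c", child_code],
                        capture_output=True, text=True)
print(result.stdout.strip())  # → blocked by limit
```

With a real runtime the equivalent cap is set at launch time (Docker, for example, exposes `--memory` and `--cpus` flags for this purpose), and the kernel, not the application, enforces it.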
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” (Mell & Grance, 2011, p. 2). It is an on-demand self-service that can supply computing capabilities unilaterally, without requiring human interaction. Capabilities are delivered over broad network access and used by heterogeneous thin or thick client platforms (Boss, Malladi, Quan, Legregni, & Hall, 2007, p. 4). The computing resources are pooled to serve multiple consumers using either the multi-tenancy or the virtualization model, according to consumer demand, and they require no up-front contract, as they are provisioned and billed on demand.
The hypervisor is the layer responsible for virtualization: a platform that allows multiple virtual machines to run simultaneously on a single physical host.
This is more of a container benefit than a Kubernetes one but, in many cases, an image is built once and then deployed unchanged across environments.
Virtualization is the creation of virtual versions of physical modules on a physical host. It provides a platform where many virtual machines can run on a single physical machine at the same time. An ESXi host is a single physical machine on which virtual machines run.
Cloud computing relies on sharing computing resources in a virtualized world, rather than owning those resources locally, to handle applications. Cloud adoption across the enterprise is a growing reality. Cloud infrastructure is also becoming popular in the scientific research and academic community because of its ability to provide a large number of computing resources for performing large-scale experiments. The advent of the Internet of Things (IoT) and smart mobile computing devices, along with secure access to Big Data and immensely improved big-data analytics capabilities, has pushed the computing paradigm into new frontiers. Cloud computing offers a completely new infrastructure for utilizing compute and storage resources.
A virtual machine is essentially a software container that bundles or “encapsulates” a complete set of virtual hardware resources, as well as an operating system and all its applications, inside a software package. Encapsulation makes virtual machines incredibly portable and easy to manage. For example, you can move and copy a virtual machine from one location to another just like any other software file, or save a virtual machine on any standard data storage medium, from a pocket-sized USB flash memory card to an enterprise storage area network (SAN).
‘Virtualization is a technology that combines or divides computing resources to present one or many operating environments using methodologies like hardware and software partitioning or aggregation, partial or complete machine simulation, emulation, time-sharing, and others’, state Susanta Nanda and Tzi-cker Chiueh of Stony Brook (2005) when describing what defines virtualization as a whole. They then go on to convey the multiple applications of virtualization over a wide range of technological areas; those mentioned are ‘server consolidation, secure computing platforms, supporting multiple operating systems, kernel de-bugging and development, system migration’ amongst many others. Cristiana Amza, Ashraf Aboulnaga, and Kenneth Salem in contrast define virtualization as ‘separating the abstract view of a computing resource or service from the implementation of this resource or service’, which amounts to allowing virtualized hardware or software to appear as it would in a physical implementation.
The provision of computing resources (i.e. processing, memory, storage and network) to allow the customer to deploy and run their own operating systems and applications. Typically, virtualisation technologies are used to enable multiple customers to share the computing resources. The service provider is only responsible for managing and maintaining the underlying infrastructure hardware and virtualisation hypervisor. Examples of IaaS offerings include government IaaS platforms, Amazon Web Services (AWS) Elastic Compute Cloud (EC2), Google Compute Engine and Rackspace Compute.
There are two main approaches to virtualization: hosted architecture and hypervisor architecture. In hosted architecture, the encapsulation layer is installed as an application on the operating system, while hypervisor architecture involves installing the encapsulation layer, or hypervisor, on a clean system, which gives it direct access to the system’s resources [2].
Containers allow us to build a unit that houses our application and all of its dependencies in an inherently immutable format. This provides us with a consistent deployment unit, capable of being injected into any server and behaving exactly the same, regardless of the environment in which it was built.
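This immutability is typically enforced by content addressing: an image is identified by a cryptographic digest of its contents, so any change yields a different identity. A minimal Python illustration of the principle (real image digests, such as Docker's sha256 manifest digests, are computed over more structured content; this only sketches the idea):

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content-address an artifact: identical bytes -> identical ID."""
    return "sha256:" + hashlib.sha256(artifact).hexdigest()

image_v1 = b"app binary + libs + env vars"
image_v2 = b"app binary + libs + env vars (patched)"

# The same content always maps to the same identity...
assert digest(image_v1) == digest(image_v1)
# ...and any modification yields a new identity, so a deployed
# unit cannot drift silently between servers.
assert digest(image_v1) != digest(image_v2)
print(digest(image_v1))
```

Deploying by digest, rather than by a mutable tag, is what guarantees every server receives the exact same unit.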
One of the most important features of virtualization is the ability to run multiple operating systems from a single piece of hardware. “Different virtual machines can run different operating systems and multiple applications while sharing the resources of a single physical computer.” (What is Virtualization, web). This is especially useful because it lets anyone run multiple operating systems on one machine. It also allows users to push a system to its limits for testing purposes: if the system crashes, all they have to do is boot it from a state they saved periodically.
Today’s online services, such as social networking and web search, are based on massive working sets, real-time constraints and high levels of parallelism. CloudSuite [11] is a widely used benchmark for simulating real-world online services in the field of cloud and data-centric computing. It covers a wide range of application categories found in today’s data centres, e.g. media streaming and data serving. CloudSuite provides an effortless way to deploy benchmarks into public and private cloud systems using Docker containers. It is also integrated with Google’s PerfKit Benchmarker.
Nokia’s CloudBand - Infrastructure Software: The Infrastructure Software can be used for several different purposes, such as an NFV Infrastructure and a Virtualized Infrastructure Manager developed with OpenStack. Its salient functionalities include the virtualization and management of the three major categories of resources, i.e., storage, compute, and network. It supports VNF instantiation and execution while maintaining the required robustness, security and performance levels.
In the cloud, when a request is processed, one or more Virtual Machine (VM) instances are built from a pre-built image. When the VM instances are deployed, they are provided with request-specific CPU, RAM, and disk capacity. VMs are placed on physical machines (PMs), each of which may be shared by multiple VMs. To minimize overall VM provisioning delays and operational costs, we assume that the PMs are grouped into three pools: hot (running), warm (turned on, but not ready) and cold (turned off). Maintaining the PMs in three pools (in general, multiple tiered pools) helps to minimize power and cooling costs without incurring high startup delays for all VMs. A pre-instantiated VM can be readily provisioned and brought to a ready state on a running (hot) PM with minimal provisioning delay. Instantiating a VM from an image takes correspondingly longer on a warm PM, and longer still on a cold one.
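The tiered-pool trade-off above can be sketched as a small model. The delay and power figures below are made-up placeholders, not measurements, and the pool-selection policy (fastest pool first) is one simple strategy among many:

```python
# Hedged sketch of the hot/warm/cold PM pool model described above.
# All numbers are illustrative placeholders only.
POOLS = {
    # pool: (provisioning delay in seconds, relative power draw)
    "hot":  (1,   1.0),   # PM running, VM pre-instantiated
    "warm": (30,  0.4),   # PM on, VM must be instantiated from image
    "cold": (120, 0.0),   # PM off, must boot before instantiating
}

def provision(available, preference=("hot", "warm", "cold")):
    """Pick the fastest tier with a free PM; return (pool, delay)."""
    for pool in preference:
        if available.get(pool, 0) > 0:
            return pool, POOLS[pool][0]
    raise RuntimeError("no capacity in any pool")

# A request served from the hot pool incurs minimal delay:
print(provision({"hot": 2, "warm": 5}))   # ('hot', 1)
# When hot PMs are exhausted, we fall back to warm, then cold:
print(provision({"hot": 0, "warm": 5}))   # ('warm', 30)
```

The operator's problem is then sizing the pools: more hot PMs lower average provisioning delay but raise power and cooling costs, which is exactly the tension the tiered-pool arrangement is meant to balance.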