A quick search online nets several definitions. The problem is, they’re not all the same. PC Magazine defines Cloud by splitting it into its different service models (SaaS, PaaS, and IaaS). The National Institute of Standards and Technology defines Cloud by focusing on its on-demand nature, broad network access, resource pooling, and elasticity. The Oxford English Dictionary, on the other hand, doesn’t mention elasticity or the on-demand nature of the Cloud, and instead states that the Cloud is a place where data is stored, managed, and processed online. So, who is right?
No one and everyone. Cloud is different depending on who you ask. From a business perspective, Cloud is the future, with Forrester predicting that 50 percent of global businesses will adopt at least one public cloud platform by the end of 2018. From a developer perspective, Cloud is a new technology to be explored and improved upon. From a government perspective, Cloud is a potential minefield of regulation. Each of these groups defines Cloud in a different way, because each of these groups sees it in a different way.
However, at its core, cloud technology is based on a few very clear concepts: virtualization, virtual machines (VMs), and virtual private servers (VPSs).
This is the Nexcess guide to virtualization, the difference between VMs and VPSs, and how these all come together to create what we know as the Cloud.
What Is Virtualization?
Before we start talking virtual machines (VMs) and the Cloud, it’s important to understand what virtualization is.
In the shortest sentence possible, virtualization is the act of making something virtual. This means that instead of there being a physical instance of something, there is a virtual one. When it comes to the Cloud and cloud-based technology, software is made to look and behave like hardware without actually being hardware.
For example, imagine taking your personal computer and dividing it into separate machines that can do many different things. The actual physical computer remains the same, but you divide system resources so they can function as though they were separate pieces of hardware.
One way in which this is possible is by running multiple operating systems (OS). Each OS is then allocated its own system resources. For instance, maybe your physical machine has 8GB of RAM and you allocated 2GB to one OS and 6GB to another. This would result in the creation of two virtual machines and is a form of virtualization.
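The 8GB example above is exactly what a hypervisor such as QEMU/KVM does in practice. As a hypothetical sketch (the disk image names are assumed to already exist, e.g. created with `qemu-img create`, and the flag values are illustrative):

```shell
# Carve one physical 8 GB host into two guests.
# -m sets the guest's RAM in MiB, -smp its virtual CPU count.
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -hda guest1.img &  # 2 GB guest
qemu-system-x86_64 -enable-kvm -m 6144 -smp 2 -hda guest2.img &  # 6 GB guest
```

Each guest boots its own operating system and sees only the RAM and CPUs it was allocated, while both actually run on the same physical hardware.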
Virtualization takes many different forms. This article will cover some of them later on.
Why Use Virtualization?
If you still need hardware in order for virtualization to be effective, why not just stick to traditional models of computing and hosting?
Largely because, without virtualization, we would not have access to the same levels of computational power we have today. Virtualization still requires hardware but allows for its resources to be applied in a much more versatile manner.
In much the same way as your electrical appliances draw power from a network of power stations and generators (the electrical grid), virtualized hardware draws its resources from a pool of physical hardware. As a result, virtualization allows for increased multitasking efficiency, or the ability to “chain” smaller hardware together to form larger overall machines.
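The pooling idea above can be sketched in a few lines. This is a toy model, not a real hypervisor API; the class and field names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Host:
    """A physical machine contributing resources to the pool."""
    cpus: int
    ram_gb: int

class ResourcePool:
    """Aggregates hardware from several hosts, like a grid of generators."""
    def __init__(self, hosts):
        self.cpus = sum(h.cpus for h in hosts)
        self.ram_gb = sum(h.ram_gb for h in hosts)

    def allocate(self, cpus, ram_gb):
        """Carve a virtual machine out of the combined capacity."""
        if cpus > self.cpus or ram_gb > self.ram_gb:
            raise ValueError("pool exhausted")
        self.cpus -= cpus
        self.ram_gb -= ram_gb
        return {"cpus": cpus, "ram_gb": ram_gb}

# Three physical hosts pooled together: 32 CPUs and 128 GB in total.
pool = ResourcePool([Host(8, 32), Host(8, 32), Host(16, 64)])

# A VM bigger than any single host -- only possible because of pooling.
vm = pool.allocate(cpus=24, ram_gb=96)
print(pool.cpus, pool.ram_gb)  # 8 32 remaining
```

The key point is the last allocation: no single host has 24 CPUs or 96 GB of RAM, but the pooled total does, which is how "chaining" smaller hardware yields larger virtual machines.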
Virtualization Throughout Time
Virtualization doesn’t just exist in the context of virtual machines. You may not realize it, but virtualization has become a staple in modern technology. As early as 1985, Intel brought virtualization to the mainstream with its 80386 (386) processor. When switched into virtual 8086 mode, this CPU could run multiple legacy software environments side by side, each isolated from the others. The benefits were huge: increased speed, productivity, and protection against one crashed program taking down the rest. After the success of the 386 processor, Intel continued to incorporate virtualization in its microprocessors.
CPUs are not the only hardware that can be virtualized; any type of hardware can be given a virtualized counterpart. Personal computers frequently use virtual memory to keep programs running smoothly even when physical RAM runs low. This virtual memory is allocated from hard drive and storage space, which is repurposed to mimic the capabilities of physical RAM.
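You can see this trick at a small scale from any program: a memory-mapped file lives on disk, yet reads and writes to it look exactly like ordinary memory access. A minimal sketch (the function name and sizes are illustrative):

```python
import mmap
import os
import tempfile

def demo_virtual_memory(size=1 << 20):
    """Map a sparse file into the address space and use it like RAM.

    The file's pages stay on disk and are pulled into physical RAM
    only when touched -- the same trick virtual memory plays with
    swap space.
    """
    fd, path = tempfile.mkstemp()
    try:
        os.truncate(path, size)              # sparse: logical size, ~0 on disk
        with os.fdopen(fd, "r+b") as f:
            with mmap.mmap(f.fileno(), size) as mem:
                mem[0:5] = b"hello"          # writing faults the page into RAM
                return bytes(mem[0:5])
    finally:
        os.remove(path)

print(demo_virtual_memory())  # b'hello'
```

From the program's point of view `mem` is just memory; the operating system quietly decides which of its pages live in RAM and which stay on disk.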
Virtualization and Cloud
The Cloud is a place where virtualization rules unopposed. Behind every cloud server lies either a VM, a VPS, or some other form of virtualization. Without virtualization to facilitate resource sharing, “The Cloud” would not be “Cloud”.
The mechanisms underlying this process are complicated enough to merit their own article. However, two of the more popular and common forms of virtualization in cloud hosting are virtual machines and virtual private servers.
What Is a Virtual Machine (VM)?
In 1974, Gerald J. Popek and Robert P. Goldberg defined virtual machines (VMs) as “efficient, isolated duplicate[s] of real computer machine[s]”. Over time, that definition has changed and expanded. But at the heart of it, a VM is still a virtualized computer system. Instead of existing in the real world, it is the product of software designed to replicate the hardware specifications of a machine and create its own environment.
This can be done in a number of ways, some of which were discussed above. However, besides having a single machine play host to multiple ‘environments’ or virtual machines (VMs), multiple physical machines can also be brought together to form a single VM. In cloud computing and cloud hosting, virtual machines are managed by a hypervisor.
What Is a Virtual Private Server (VPS)?
What exactly constitutes a virtual private server (VPS) is the subject of some controversy. In some circles, VPS is just a fancy marketing term for a VM. In others, it stands on its own, marked by the use of “Private”.
In these other explanations of what a VPS is, a virtualized system is restricted to being a smaller part of a larger hardware server. Basically, a single physical server is split into multiple smaller virtual private servers, which then share the underlying hardware's resources. While much more cost effective for smaller websites, they do not offer the same opportunities afforded by dedicated VMs.
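The split described above is easy to picture with some back-of-the-envelope arithmetic. A toy illustration (not a real provisioning API; the numbers are made up):

```python
def split_into_vps(cpus, ram_gb, disk_gb, slices):
    """Return the equal resource share each VPS on one server receives."""
    return {
        "cpus": cpus // slices,
        "ram_gb": ram_gb // slices,
        "disk_gb": disk_gb // slices,
    }

# A single 32-core, 128 GB, 2 TB server carved into 8 virtual private servers:
plan = split_into_vps(cpus=32, ram_gb=128, disk_gb=2000, slices=8)
print(plan)  # {'cpus': 4, 'ram_gb': 16, 'disk_gb': 250}
```

Each customer pays only for their slice, which is why a VPS is cheaper than a dedicated machine, but the slices cap out at a fraction of one physical server's capacity.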
Virtualization and the Future
Virtualization continues to become increasingly complex. Newer models and architectures are being deployed, including containers, popularized by applications such as Docker. These developments show no sign of slowing, and public cloud adoption has continued to increase: in one survey, adoption now stands at 92 percent, up from 89 percent last year. In fact, analyst firms such as Gartner predict that cloud computing will likely become the default standard for computing by 2020.
Over the coming years, we will likely see cloud virtual machines continue to transform to meet the demands of those adopters and it will be an interesting sight to see.
Originally published April 26, 2018.