Today, the terms “converged” and “hyper-converged infrastructure” are finding their way into more architecture discussions as well as management meetings. The trend is supported by efforts to build internal cloud services for applications or data that cannot move to a public cloud, as well as by the ongoing drive to squeeze costs out of IT.
Unfortunately, “converged” and “hyper-converged infrastructure” are often used interchangeably, which can cause confusion and send your system architects down the wrong path. Converged systems are those that provide modular computing, storage and networking functionality through a pretested system design. The components remain separate, but their compatibility is verified. For example, VCE’s Vblock Systems, built on a joint venture between Cisco, EMC and VMware, provide an integrated infrastructure.
Hyper-converged systems go one step further than interoperability agreements or the testing that supports converged systems. Hyper-converged systems are all-in-one solutions. They provide computing, storage and networking in a single appliance that uses software to define how its raw capacity will be allocated. Moreover, the hardware that supports hyper-convergence is standard, off-the-shelf equipment that is compatible across a range of BIOS, OS and virtualization vendors. This reduces costs by using commodity hardware to provide high-level, software-defined infrastructure services.
The Evolution of Commodities and Software
Delivering high-level computing, storage and networking through commodity hardware is possible due to the evolution of two underlying technologies. First, the raw capacity of modern CPUs has continued to follow Moore’s law, which suggests that the number of transistors in a given space tends to double about every two years.
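As a back-of-the-envelope illustration of that doubling (an idealized model, not a precise forecast), the compounding over a single decade is substantial:

```python
# Idealized Moore's law projection: transistor count doubles
# roughly every two years.
def projected_transistors(initial_count: int, years: int) -> int:
    """Project transistor count after `years`, doubling every 2 years."""
    return initial_count * 2 ** (years // 2)

# Five doublings over ten years yields a 32x increase.
print(projected_transistors(1_000_000, 10))  # -> 32000000
```

That 32x-per-decade compounding is what leaves enough headroom on a commodity CPU to run infrastructure functions in software alongside the workloads themselves.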
Second, the doubling of raw computing power now supports the virtualization of many infrastructure functions, so that software implements the firewall, router, switch, vCPU and so on, replacing single-function infrastructure appliances. In reality, those infrastructure functions were always software running on specialized hardware. Today, the specialized hardware is less significant.
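To make the “software becomes the firewall” point concrete, here is a minimal, hypothetical packet-filter sketch (the rule structure and names are illustrative, not any vendor’s API):

```python
import ipaddress

# A minimal software-defined packet filter: each rule is a
# (source network, destination port, action) tuple, evaluated in order.
RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), 22, "allow"),   # SSH from internal hosts
    (ipaddress.ip_network("0.0.0.0/0"), 22, "deny"),     # SSH from anywhere else
    (ipaddress.ip_network("0.0.0.0/0"), 443, "allow"),   # HTTPS from anywhere
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule; default deny."""
    addr = ipaddress.ip_address(src_ip)
    for network, port, action in RULES:
        if addr in network and dst_port == port:
            return action
    return "deny"

print(filter_packet("10.1.2.3", 22))     # -> allow
print(filter_packet("203.0.113.9", 22))  # -> deny
```

A hardware firewall appliance runs essentially this kind of logic on dedicated silicon; on a hyper-converged platform, the same rule evaluation simply runs as software on commodity CPUs.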
You might ask, if infrastructure functions have always been software based, why is hyper-convergence only gaining popularity now? Part of the answer is the CPU capacity described above. But what’s more interesting is the “Apple-izing” of infrastructure and the amazing things our software friends are doing to every user interface (UI) on the planet. Gone are the days of cryptic commands that required an engineer to learn new languages and think in highly abstract ways. Now, you have a stylized setup screen that asks questions and then takes action based on the answers, much like setting up your first iPhone. In this way, many components of the data center infrastructure have simply become apps running on hyper-converged platforms.
A Cloud for Everyone
Further supporting the current popularity of hyper-convergence is the way that cloud service providers have demonstrated the benefits of combining UIs, apps and software-defined infrastructure. Our friends in the Amazon Web Services (AWS) ecosystem have been showing us for many years that router functions can be API calls and that computing power is a mouse click away. This has created a desire for simple-to-configure local infrastructure resources, a desire that hyper-converged platform providers have met.
So who is using hyper-converged platforms? Small organizations are deploying them as their primary computing, storage and networking systems. Mid-sized organizations are looking to hyper-converged platforms for off-site disaster recovery and business continuity services. Large organizations are deploying them to branch offices and field locations to provide local processing speed with data caching capabilities, or for development and test environments.