As IT trends toward consolidating fractured layers of specialized, separately managed workloads, data centers are transforming into hyper-converged infrastructures that combine compute, storage and network resources in a single system. But despite HCI’s deployment and cost advantages, its purpose-built, monolithic design can have limitations.
Enter HCI 2.0: disaggregated hyper-converged infrastructure. “DHCI is vastly superior to HCI. It offers all of the same benefits that you receive with HCI, but it also addresses many of the pain points that have come to be associated with the original HCI technology,” says Brien Posey, a Microsoft MVP whose 30 years of IT experience include serving as lead network engineer for the U.S. Department of Defense, CIO for a chain of hospitals and healthcare facilities, and a network administrator for major insurance companies.
One of those HCI “pain points” centers on the number, composition and scalability of nodes that plug into the server chassis. An HCI cluster typically comprises anywhere from four to 10 nodes. “Now, the problem with that,” Posey says, “is that each node consists of not just storage but also compute and network resources. So, by adding nodes as a way of expanding storage, the organization is effectively paying for compute and network resources that it really doesn’t need.”
According to a survey of IT professionals by Enterprise Strategy Group (ESG), a division of TechTarget, 92% of respondents said the ability of HCI nodes to scale compute and storage resources independently of each other was either critical or important to avoid adding new nodes or underutilizing an existing node.
“The advantage to switching to a disaggregated model,” Posey explains, “is that it allows you to avoid paying for these hardware resources that you don’t actually need … the individual components can be upgraded individually.”
In this video, Posey compares traditional, converged and hyper-converged infrastructures and details why disaggregated HCI “gives you the ability to pay only for what you need … [and] delivers that consumption-based IT pricing to your HCI environment.”
Brien Posey: Hi, I’m Brien Posey, and today I want to talk about why one size doesn’t always fit all, especially when it comes to hyper-converged infrastructure. Before I get going, I want to give you some idea of where I’m going with this presentation. I’m going to start out by defining some basic terms. I’ll be talking about things like traditional infrastructure and converged infrastructure. And then from there, I’m going to talk about how those compare with traditional hyper-converged infrastructure. I’ll also be talking about the pros and cons of hyper-converged infrastructure, and how disaggregated hyper-converged infrastructure improves upon standard hyper-converged infrastructure — or HCI, as it’s often referred to.
Before I get into the main part of this presentation, I just want to take a moment and introduce myself and give you a little bit of information about my background. Again, my name is Brien Posey, and I’m a freelance technology author and speaker. I’m also a 19-time Microsoft MVP. Before I went freelance, I worked as a network administrator for some of the largest insurance companies in America. I was also the lead network engineer for the United States Department of Defense at Fort Knox. And, I worked as a CIO for a national chain of hospitals and healthcare facilities. In addition to my IT background, I’ve also spent the last several years training as a commercial astronaut candidate in preparation for a mission to study polar mesospheric clouds from space. So, that’s just a little bit about me. Let’s get on with the presentation.
So, I want to begin the discussion by talking a little bit about traditional infrastructure. Traditional infrastructure, at least for the purposes of this discussion, can be thought of as the old way of doing things. Although, in all fairness, traditional infrastructure is still very widely used today. The idea behind traditional infrastructure is that an organization purchases all of the various data center components individually. This includes things like servers, storage, network components and things like that. Now, keep in mind that in a traditional infrastructure, the organization has the luxury of mixing and matching components. So, they might purchase a server from one vendor, they might purchase storage hardware from someone else, and they could even purchase network hardware from yet another vendor. Of course, they also have the option of purchasing these components from a common vendor. The point is that, in a traditional infrastructure, all of the components are purchased individually.
There are two main advantages to using traditional infrastructure. The first advantage is granular component selection. In other words, because everything is being purchased individually, the IT department has the option of choosing exactly the components that they want to use. If there is a specific server that they think would be especially well suited to servicing a particular workload, then they can certainly use that server. Likewise, with the storage and with other components. However, this isn’t the only advantage to traditional infrastructure. The other advantage is that component purchases could be spread out over time. Now, this probably isn’t something that would happen during the initial acquisition. But as hardware refreshes occur over time, those purchases can be spread out. For example, a lot of organizations will purchase a new server every three to five years. Well, server hardware can be expensive. So, at the five-year mark when the organization purchases new servers, they might not necessarily want to purchase new storage in the same year. So, they might do that the next year, and then purchase new networking hardware the year after that, and then move back to purchasing new server hardware on the next cycle. So that way, the organization isn’t having to purchase all new data center equipment all at once. So, those are the two main advantages to traditional infrastructure.
Of course, there are also some significant disadvantages associated with using the traditional infrastructure model.
One of the primary disadvantages to using this approach is that there’s no guarantee that the components that you purchase are going to be performance matched to one another. Suppose for a moment that you were to purchase two different components. These can be anything, maybe a server and storage, or storage and networking. It doesn’t really matter for the purpose of this example. But let’s suppose that one of those components was really high-end, whereas the other was a bit more modest in scope. In this example, the two components are not performance matched. And, as a matter of fact, the more modest component would hold back the performance of the high-end component. So, what that actually means is that even though you spent a lot of money on a high-end component, you’re not going to get the maximum benefit out of that component because the other components can’t keep up with it. So, that’s what I mean when I say that components may not be performance matched. Now, it is possible to performance-match components in a traditional infrastructure. But, in order to do so, the IT staff has to do a lot of hard work to make sure that they’re purchasing components that mesh well with one another and that are performance matched. It’s not something that’s going to happen automatically.
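The bottleneck effect Posey describes can be sketched numerically. The figures below are purely hypothetical throughput ceilings, not benchmarks: the point is simply that the end-to-end rate is capped by the slowest component in the path.

```python
# Hypothetical per-component throughput ceilings in MB/s (illustrative only).
components = {
    "server_io": 6000,  # high-end server
    "storage": 1200,    # more modest storage array
    "network": 1250,    # roughly a 10 GbE link
}

# End-to-end throughput is limited by the slowest component,
# so the high-end server never gets to run at full speed.
bottleneck = min(components, key=components.get)
effective = components[bottleneck]
print(bottleneck, effective)  # storage 1200
```

Here the expensive server delivers only 1,200 MB/s of useful throughput, which is the sense in which unmatched components waste money.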
Another potential disadvantage is that it might be impossible to take advantage of new features. This is especially true in the rolling-refresh model that I described a moment ago — in which new server hardware is purchased one year and then storage the next year, the network components the next year and so on — because when you purchase new hardware, that new hardware is likely going to include new features. But, if you’re bringing that new hardware into your data center to run alongside existing hardware that’s aging and is going to be refreshed in the next year or two, there’s a good chance that the aging hardware isn’t going to be able to support the new features on the newly purchased hardware. So eventually, of course, you’ll be able to take advantage of those features when you purchase the next round of new hardware. But that newly purchased next round of new hardware is probably going to contain new features as well that you might not be able to readily use. So, that’s just something to think about.
One more issue that’s commonly experienced in the traditional infrastructure environment, and an issue that I’ve experienced personally several times, is that getting technical support can be difficult. This is especially true if you’re mixing and matching products from various vendors because what might happen is that one vendor might blame another vendor for the problems that you’re having. If, for example, you purchase a server from one vendor and storage from another, you might have the server vendor blaming the storage vendor for your problems, or vice versa. In any case, when vendors start blaming one another, it can become extremely difficult to get the help that you need and get your issue resolved. So, you can avoid a lot of the vendor finger-pointing by purchasing hardware from a common vendor.
Now that I’ve spent a little bit of time talking about traditional infrastructure, I want to turn my attention to converged infrastructure, which you’ll sometimes see abbreviated as CI. Converged infrastructure is similar to traditional infrastructure in that it uses components such as compute, storage and network. But, one thing that’s really different between converged infrastructure and traditional infrastructure is that the components all come from a single vendor and they’re performance matched. Not only that, but the components are actually certified to work together and they’re sold as a complete system. Some people have referred to this as the “appliance approach” to networking because you get compute, storage and network resources all as a package deal sold under a single SKU. Now, it’s worth noting that even though the components are sold as a set, they can be used individually if an organization is so inclined. For example, an organization could take the server from a converged infrastructure set and use it as a standalone server if they had a need to do that. But, the big takeaway from converged infrastructure is that the converged infrastructure components are sold as a set from a single vendor, and all the components are guaranteed to work together and they’re performance matched. Perhaps more importantly, the vendor acts as the sole source of technical support. So, you don’t have to worry about the whole vendor finger pointing issue. All the components come from one vendor who guarantees them to work together and that vendor also guarantees that they will support those components should issues arise in the future.
So, now I’m going to turn my attention to hyper-converged infrastructure — or HCI, as it’s often called. HCI has a lot of similarities to converged infrastructure, but there are several key differences that you need to be aware of. Like converged infrastructure, HCI is sold as a bundle from a single vendor. The HCI bundle consists of storage, compute and network resources that are all performance matched and certified to work together.
One of the key differences between HCI and converged infrastructure, however, is that whereas converged infrastructure components can be used separately should the need arise, HCI is tightly integrated into a node model. When you purchase HCI, you generally receive an empty chassis and a series of nodes, and each one of these nodes is a modular component consisting of compute, storage and network resources that plugs into the chassis. So, because the hardware is integrated into a node, it can’t be separated out and used individually.
Another major difference between HCI and converged infrastructure is that HCI places a major emphasis on software. As a matter of fact, the HCI vendors’ ultimate goal is to make the hardware nearly invisible and focus almost solely on the software. So, having said that, though, HCI tends to be designed for a very specific purpose. For example, there are HCI deployments that are designed specifically for virtualization. There are also HCI deployments that are meant for desktop virtualization, or for backup or for various other purposes. The point being, though, that HCI deployments are purpose-built. So, what this means is that when you purchase an HCI deployment, you’re not just purchasing hardware; you’re also getting software. So take, for example, an HCI deployment that’s designed for virtualization. Typically, what you would receive with such a deployment is a server chassis, multiple nodes to go in that chassis, and then you would also typically receive a hypervisor, a management component — something like Microsoft System Center Virtual Machine Manager or VMware vCenter Server, something like that — and then you would also typically receive a software component that’s designed to help you to manage the individual nodes. So, this is a hardware management tool that’s specifically designed for the hyper-converged infrastructure deployment.
As you can imagine, there are a number of advantages associated with using HCI.
One such advantage is that HCI is designed to be really easy to deploy — far easier than a traditional system. As a matter of fact, you can unbox an HCI deployment in the morning and be running workloads on it by the afternoon. It’s that easy to set up. But, HCI is also meant to be easy to operate. Remember, one of the goals that the vendors have behind their HCI deployments is to place the emphasis on the software rather than the underlying infrastructure. So, this means that admins are free to focus on workloads — things like virtual machines or virtual desktops — rather than having to constantly monitor the underlying hardware.
Additionally, HCI packages are often built on commodity hardware. Now, in the beginning, nearly all HCI packages relied on commodity hardware. Today, there are some more premium packages out there, but the fact that you can get HCI packages that are built on commodity hardware helps keep the price low.
And then finally, HCI is meant to be scalable. Remember, HCI consists of a series of nodes that plug into a chassis. So anytime that you need to scale, all you have to do is purchase additional nodes and plug those nodes in. Now, yes, there is a limit to the number of nodes that a chassis can accommodate. But, you can deploy multiple chassis and then fill those chassis up with nodes as you need to scale your workloads.
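The node-and-chassis scaling model lends itself to simple capacity math. As a sketch, and assuming purely hypothetical figures of 20 TB of usable storage per node and eight node slots per chassis, the purchase required to hit a storage target is just two ceiling divisions:

```python
import math

# Hypothetical sizing assumptions for the sketch (not vendor figures).
tb_per_node = 20        # usable storage each node contributes
nodes_per_chassis = 8   # node slots per chassis
target_tb = 500         # storage capacity the organization needs

# Round up, since you can only buy whole nodes and whole chassis.
nodes_needed = math.ceil(target_tb / tb_per_node)
chassis_needed = math.ceil(nodes_needed / nodes_per_chassis)
print(nodes_needed, chassis_needed)  # 25 4
```

Note that each of those 25 nodes also brings compute and network resources along with its storage, which is exactly the coupling that the disaggregated model discussed below is meant to break.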
Even though HCI is designed to be comparatively inexpensive and to be easy to deploy and operate, there are some disadvantages associated with using it. So, let’s talk about some of those disadvantages. One such disadvantage is that some vendors don’t allow the individual components to be upgraded. Now, what do I mean by that? Well, you’ve got to remember that every vendor has their own way of doing things. But, some vendors don’t allow you to do component level upgrades within the HCI infrastructure. Suppose for a moment that you decide that your servers need more memory. You may or may not, depending on the vendor, be able to install additional memory into a server module. Likewise, if you decide that you need more storage space, you might be prevented from installing larger hard disks inside a storage module.
Another potential disadvantage is that because HCI depends so heavily on software, HCI is generally designed for a specific purpose and might not be usable for anything else. And then finally, the node design limits your options for getting better performance. Remember, HCI is designed to be modular. So, if you need to scale a workload or you need increased performance, the way that you would typically do that is by purchasing and installing additional nodes. And the same thing also goes for storage: If you need increased storage capacity, then the way that you would typically do that is by purchasing additional nodes, each of which comes with integrated storage. This is going to be a particularly important point to keep in mind as we go along and begin talking about disaggregated hyper-converged infrastructure.
Now that I’ve spent a little bit of time talking about hyper-converged infrastructure, or HCI, I want to turn my attention to a newer technology called disaggregated hyper-converged infrastructure, or DHCI for short. DHCI is sometimes referred to as HCI 2.0. As you’ve probably guessed from the 2.0 designation, DHCI is vastly superior to HCI. It offers all of the same benefits that you receive with HCI, but it also addresses many of the pain points that have come to be associated with the original HCI technology.
Perhaps the biggest advantage to DHCI, which I’ll go into in more depth in just a moment, is that it gives you the ability to pay only for what you need. You know, over the last several years, consumption-based IT has become really popular because it gives you a pay-as-you-go model and allows you to only pay for what it is that you actually need. And that’s exactly what DHCI does for you. It delivers that consumption-based IT pricing to your HCI environment.
So, what do I mean by consumption-based pricing? Well, as you’ll recall, one of the big shortcomings to HCI was that the only way to scale was to add additional nodes. And sometimes this meant paying for things that you didn’t really need. Suppose for a moment that an organization realizes that they need to add additional storage to their HCI deployment. Well, typically, the individual hard disks aren’t going to be upgradable. So, the organization has no choice but to purchase additional nodes. Now, the problem with that is that each node consists of not just storage but also compute and network resources. So, by adding nodes as a way of expanding storage, the organization is effectively paying for compute and network resources that it really doesn’t need.
The advantage to switching to a disaggregated model is that it allows you to avoid paying for these hardware resources that you don’t actually need. In a disaggregated environment, the individual components can be upgraded individually. So if, for example, an organization realizes that they need to add storage capacity, they don’t have to go out and purchase a new node; they can simply upgrade the storage module instead. And the main benefit of that, obviously, is that it reduces the organization’s cost. But, a secondary benefit is that the organization may end up having fewer nodes that they have to manage. Because, after all, if you can increase your storage capacity without adding additional nodes, then you’re ultimately going to end up with fewer nodes that you have to take care of. And that reduces your overall hardware cost, and it can reduce your licensing cost as well.
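The cost difference between the two scaling paths can be put in back-of-the-envelope terms. All of the prices and capacities below are hypothetical, chosen only to show the shape of the comparison: in HCI, adding storage means buying whole nodes (compute and network included), while in DHCI only the storage modules are purchased.

```python
# Hypothetical list prices and capacities, purely for illustration.
node_price = 30000             # full HCI node: compute + storage + network
storage_upgrade_price = 8000   # storage-only module for a DHCI system
tb_per_node = 20               # storage capacity per HCI node
tb_per_upgrade = 20            # capacity per DHCI storage module
extra_capacity_needed_tb = 40  # the storage the organization wants to add

# HCI: the only way to add 40 TB is to buy whole nodes.
hci_cost = (extra_capacity_needed_tb // tb_per_node) * node_price

# DHCI: upgrade the storage independently of compute and network.
dhci_cost = (extra_capacity_needed_tb // tb_per_upgrade) * storage_upgrade_price

print(hci_cost, dhci_cost)  # 60000 16000
```

Under these assumed numbers, the HCI route spends $44,000 on compute and network resources the organization never asked for, which is the “paying for what you don’t need” problem that consumption-based DHCI pricing avoids.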
Another way that you may be able to reduce your node count is by using more powerful hardware. Remember, with the original HCI, deployments were often based on commodity hardware. The idea was to keep the hardware relatively inexpensive so that anytime organizations needed to add capacity, they could simply install an additional node. Well, with DHCI, you’re not necessarily limited to commodity hardware. Yes, commodity hardware does exist, but it’s also possible to get powerful enterprise-grade hardware. So, because the hardware can be more powerful, you can end up needing fewer nodes than you would if you were relying exclusively on commodity hardware.
So, with that said, I wanted to wrap up this presentation by pointing you toward a resource that you might find helpful. I wrote an article for TechTarget on disaggregated hyper-converged infrastructure as it compares to traditional HCI. This article is a technical examination of what the impending demise of hyper-converged infrastructure 1.0 and the rise of disaggregated hyper-converged infrastructure means for things like planning, buying, deployment and management. And I’ve provided a link to the article at the bottom of the slide. So, I hope you found this presentation to be helpful. I’m Brien Posey, thanks for watching.