Composable infrastructure treats compute, storage, and network devices as pools of resources that can be provisioned as needed, depending on what different workloads require for optimum performance. It’s an emerging category of infrastructure that’s aimed at optimizing IT resources and improving business agility.
The approach resembles a public cloud in that resource capacity is requested and provisioned from a shared pool – except composable infrastructure sits on-premises, in an enterprise data center.
IT resources are treated as services, and the composable aspect refers to the ability to make those resources available on the fly, depending on the needs of different physical, virtual and containerized applications. A management layer is designed to discover and access the pools of compute and storage, ensuring that the right resources are in the right place at the right time.
The goal is to reduce underutilization and overprovisioning while creating a more agile data center, says Ric Lewis, senior vice president and general manager of the software-defined and cloud group at Hewlett Packard Enterprise, which offers the Synergy composable infrastructure platform.
“When a customer logs onto a public cloud, they grab a set of resources: compute, storage, fabric. ‘I need this much stuff to be able to run this application. Please give that to me. I’ll run this application, and when I’m done, I’ll give it back to you and you can use it with somebody else,’” Lewis says.
“What we did with composable infrastructure is build that into the platform. We can do the same dynamic resource sharing.”
Composable vs. converged vs. hyperconverged infrastructure
Converged infrastructure involves a preconfigured package of software and hardware in a single unit that enables simplified procurement and easier operation than traditional servers, storage and networking switches. A converged infrastructure is typically designed for a specific application or workload, and while the compute, storage and networking components are physically integrated, the management of those discrete resources often remains siloed.
Hyperconvergence adds deeper levels of abstraction and greater levels of automation for easy-to-consume infrastructure capacity. In a hyperconverged environment, the software-defined elements are implemented virtually, with seamless integration into the hypervisor environment. Organizations can expand capacity by deploying additional modules.
Like a converged or hyperconverged infrastructure, composable infrastructure combines compute, storage and network fabric into one platform. But it’s not preconfigured for specific workloads like a converged or hyperconverged infrastructure is.
“As long as you want to do software-defined storage for virtualization – that’s really solved well” with hyperconvergence, Lewis says. But, with data-center customers in particular, “they’re not just doing virtualized environments, and they’re not doing all of them on software-defined storage. They’re doing big-scale things where they’re running virtual machines. They’re also running bare metal,” he says. “Customers want a simple environment for VMs, bare metal, containers and for their new cloud-native applications.”
Hyperconverged infrastructure also has a scalability limitation; typical hyperconverged environments scale to 20 or 30 nodes, Lewis says. “Hyperconvergence is great, but it doesn’t solve all workloads, and it doesn’t scale to the level that customers are going to want to scale to.”
Composable infrastructure, which is also described as “infrastructure as code” or “disaggregated infrastructure,” takes things a step further, with more fluid resource pools.
Flexible, reapportionable resources
In a composable infrastructure world, resources can be reconfigured to compose the exact-sized infrastructure environment each workload needs. A developer could request a virtual machine with any combination of compute, network and storage capacity, for example, and when the workload is done running, those infrastructure resources are delivered back to the pool for other users to access. One workload could be a compute-heavy application requiring a lot of CPU power, while another could be memory-heavy.
“The application can grab whatever it needs at the time that it runs, and when it’s done, it returns it to the pool. It’s not just sitting there dedicated to running VMs, like a hyperconverged environment,” Lewis says. The result is that the on-premises infrastructure looks more like public-cloud infrastructure-as-a-service environments.
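The compose-and-release cycle described above can be sketched in a few lines of code. This is a purely illustrative model – the `ResourcePool`, `compose`, and `release` names are hypothetical, not HPE Synergy's actual API – but it captures the idea of workloads carving right-sized slices out of shared capacity and returning them when finished:

```python
# Illustrative sketch only (not a real vendor API): a shared pool of
# compute, storage, and network capacity from which each workload
# "composes" exactly what it needs, then returns it when done.

class ResourcePool:
    def __init__(self, cpu_cores, storage_gb, net_gbps):
        self.free = {"cpu": cpu_cores, "storage": storage_gb, "net": net_gbps}

    def compose(self, **needs):
        """Carve out a right-sized slice for one workload, if capacity allows."""
        if any(needs[k] > self.free[k] for k in needs):
            raise RuntimeError("insufficient capacity in pool")
        for k, v in needs.items():
            self.free[k] -= v
        return dict(needs)  # handle representing the composed system

    def release(self, allocation):
        """Return a finished workload's resources to the shared pool."""
        for k, v in allocation.items():
            self.free[k] += v

# A compute-heavy job draws a CPU-skewed shape from the pool...
pool = ResourcePool(cpu_cores=64, storage_gb=2000, net_gbps=100)
job = pool.compose(cpu=48, storage=100, net=10)
# ...the workload runs, then its capacity goes back for other users:
pool.release(job)
```

Note that nothing here is dedicated to a particular workload type in advance: a storage-heavy job could draw a completely different shape (say, few cores and most of the storage) from the same pool, which is the contrast with preconfigured converged or hyperconverged building blocks.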
HPE is among the first to make a composable infrastructure platform available.