Over the last decade, a compelling narrative has emerged: public cloud as the de facto standard, often pushed under a 'cloud-first, cloud-only' banner. The reality on the ground tells a different story. Despite the hype around transitioning to the public cloud, many organisations remain strongly inclined to maintain a robust on-premises infrastructure.
Over the last five years, organisations have leaned towards the public cloud, largely because of the cost savings it appears to offer. In practice, the supposed affordability of a cloud-first strategy often fails to live up to its promise. A broader range of IT strategies has emerged as a result, with many organisations finding value in balancing workloads across multiple platforms.
Balancing workloads: public cloud vs. on-premises
The key to effectively managing workloads lies in understanding their characteristics and requirements, and determining the most cost-effective, performant, and agile platform for each. Some workloads, especially those that depend on the native services of large cloud providers such as AWS and Azure, are best suited to the public cloud. Many others need none of these services and run at a steady, predictable scale, which makes it hard to justify the premium of hosting them in the public cloud.
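As a loose illustration of this kind of assessment, the sketch below scores a single workload on a few of the factors discussed here: dependence on native cloud services, variability of demand, data sensitivity, and latency needs. The attributes, weights, and threshold are hypothetical assumptions for illustration only and would need to reflect an organisation's own costs and constraints.

```python
# Illustrative only: a simple placement heuristic for a single workload.
# The attributes, weights, and threshold are hypothetical assumptions,
# not a prescribed methodology.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    uses_native_cloud_services: bool  # e.g. managed queues, serverless, AI APIs
    demand_variability: float         # 0.0 = flat and predictable, 1.0 = highly bursty
    data_sensitivity: float           # 0.0 = public data, 1.0 = highly sensitive/regulated
    needs_low_latency: bool           # e.g. sub-millisecond or high-IOPS storage access


def suggest_platform(w: Workload) -> str:
    """Return a rough placement suggestion: 'public cloud' or 'on-premises'."""
    score = 0.0
    if w.uses_native_cloud_services:
        score += 2.0                       # native services are the strongest pull to public cloud
    score += 1.5 * w.demand_variability    # bursty demand benefits from elastic capacity
    score -= 1.5 * w.data_sensitivity      # sensitive data pulls towards on-premises
    if w.needs_low_latency:
        score -= 2.0                       # latency/IOPS-bound workloads favour local infrastructure
    return "public cloud" if score > 0 else "on-premises"


if __name__ == "__main__":
    legacy_erp = Workload("legacy ERP", False, 0.1, 0.8, True)
    seasonal_web = Workload("seasonal web front end", True, 0.9, 0.2, False)
    for w in (legacy_erp, seasonal_web):
        print(f"{w.name}: {suggest_platform(w)}")
```

In practice, the inputs would come from a proper discovery exercise and would include licensing, egress, and operational costs, but the principle is the same: place each workload where its actual profile is served most economically.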
Public sector organisations in particular often run legacy workloads that do not lend themselves to re-architecting and are therefore poor candidates for migration to the public cloud. The financial and technical hurdles involved, from cloud readiness assessments through to the re-architecting work itself, are often prohibitive.
Navigating the shift: vendor innovation and hardware convergence
Recognising this shift towards workload-specific platform selection, hardware manufacturers are innovating to integrate private, public, and hybrid environments. The industry is moving towards a single point of management and cost control for all devices and workloads. Initiatives such as Dell's Project Alpine, which aims to run Dell's primary storage software natively in public cloud environments, are pioneering this transition.
The rise of hyperconverged infrastructure (HCI), backed by significant investment from industry giants such as VMware, Microsoft, and Dell, has been instrumental in fostering multi-cloud management. Technologies like Azure Arc offer a unified interface for managing and maintaining server and storage estates, providing an identical user experience regardless of where a workload runs. This flexibility is critical, especially for workloads whose size and requirements fluctuate.
The decision to adopt HCI is no longer solely about capital expenditure. The focus has shifted towards understanding the applications, their criticality, and the value they bring to business operations. This is a marked departure from the traditional hardware-centric approach, and it makes costs far easier to apportion in a way that reflects how the business actually functions.
Because the HCI model abstracts the hardware layer, a Managed Service Provider (MSP) can take on much of the technical burden. Businesses consume services on an operational expenditure basis and rely on their MSP to ensure those services run efficiently, meet Service Level Agreements (SLAs), and deliver the required user experience. This frees the internal IT team to concentrate on value-adding work that improves the wider business's operations and services.
Hybrid cloud: the best of both worlds
The hybrid cloud model is gaining traction because it offers the best of both worlds. The ability to manage workloads dynamically, such as seasonal workloads that need to scale up and back down, is a significant advantage: companies pay for capacity only when they need it, rather than buying hardware that sits idle outside short peak periods.
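A rough back-of-the-envelope comparison illustrates why. In the sketch below, every price, duration, and capacity figure is a hypothetical assumption chosen purely to show the shape of the calculation, not a quoted or typical cost.

```python
# Illustrative arithmetic only: all prices, durations, and capacities are
# hypothetical assumptions, not quoted figures.
PEAK_MONTHS = 3            # seasonal peak lasts three months a year
EXTRA_SERVERS = 10         # additional capacity needed during the peak

# Option A: buy hardware sized for the peak (amortised over five years).
SERVER_CAPEX = 12_000      # assumed purchase cost per server
AMORTISATION_YEARS = 5
annual_on_prem_cost = EXTRA_SERVERS * SERVER_CAPEX / AMORTISATION_YEARS

# Option B: burst to the public cloud only for the peak months.
CLOUD_INSTANCE_PER_MONTH = 600   # assumed cost of a comparable instance
annual_cloud_burst_cost = EXTRA_SERVERS * CLOUD_INSTANCE_PER_MONTH * PEAK_MONTHS

print(f"Own the peak capacity:  {annual_on_prem_cost:,.0f} per year")
print(f"Burst to public cloud:  {annual_cloud_burst_cost:,.0f} per year")
# With these assumptions the burst option is cheaper because the extra
# hardware would sit idle for nine months of the year; steady workloads
# that run all year round tilt the comparison the other way.
```

The point is not the specific numbers but the pattern: spiky, short-lived demand favours paying for public cloud capacity on demand, while flat, year-round demand favours owned infrastructure.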
Furthermore, the hybrid cloud model is strongly favoured by companies dealing with sensitive and critical information. It offers the convenience and scalability of the public cloud while ensuring data sovereignty and security through on-premises storage.
However, some workloads remain unsuitable for migration to the cloud, whether because they demand very low latency or very high IOPS. For these, an on-premises solution remains the most viable option. A 2021 Virtana report found that 72% of respondents planned to move some applications back to on-premises environments, and Dropbox reported saving roughly $75 million over two years by moving workloads back to its own data centres.
So, while the pattern of use for data centres may change, their importance in today's hybrid cloud environment remains undiminished. The future of private data centres is not a question of obsolescence, but of how they can continue to provide value in a complex, multi-cloud world.
To learn more, watch our recent webinar ‘Is the data centre obsolete?’ or get in touch today to learn how we can help.
Written by Ioan Elwick – Solution Architect, Advanced, & Jake Fielden – Technical Practice Manager, TD Synnex