According to analyst house Gartner, appetite for micro and edge computing is one of the top 10 technology trends for the next 12-18 months, with organisations increasingly looking to place datacentre resources closer to users and their devices. With media attention on the subject high, we decided to take a look at the benefits of the edge and whether it signals the end of the cloud.
Edge analytics – analysing data closer to where it is collected – is most often referred to in conjunction with the cloud and the Internet of Things (IoT), where everyday ‘items’ (think fridges and boilers, through to wearables such as fitness trackers) are connected to the internet to provide valuable data. IoT is no longer future thinking but very much the here and now. And as consumers adopt technology in ever more areas of life, processing is being pushed – in some instances – to the edge (essentially, where these items are used).
In a world with sensors everywhere and an ever-increasing flow of data, edge analytics offers a way to derive value from that data that is faster, simpler and, in many cases, more practical. With many IoT devices deployed in the field and in areas of low connectivity, it is important that their access to data is quick and reliable; having a datacentre locally saves engineers a lot of time and allows them to respond to issues much more quickly, minimising downtime.
With high-speed networks such as 5G being rolled out, and with all of us demanding more from the technology we use, the edge’s benefits extend far beyond the IoT to more traditional data systems.
Five benefits of moving to the edge:
1. Improved user experience
Devices (e.g. smartphones), their native sensors (e.g. GPS, accelerometers) and their connectivity are in many instances a key part of the ‘system’ and the user experience. Here, the app on the smartphone is effectively at the edge, processing the data it collects locally – cleaning it and enriching it with context – in what is essentially a client-server computing model. This helps reduce latency and will become increasingly important with the introduction of high-speed networks such as 5G.
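As a rough illustration of that client-server split, the sketch below shows what on-device preprocessing might look like: raw GPS readings are cleaned (poor fixes dropped) and enriched with context before anything crosses the network. The reading structure, accuracy threshold and field names are invented for the example, not any particular platform’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical raw reading from a phone's GPS sensor.
@dataclass
class GpsReading:
    lat: float
    lon: float
    accuracy_m: float  # reported accuracy radius in metres

def clean_and_enrich(readings: list[GpsReading]) -> list[dict]:
    """Edge-side preprocessing: drop poor fixes, add context locally."""
    enriched = []
    for r in readings:
        if r.accuracy_m > 50:          # cleaning: discard low-accuracy fixes
            continue
        enriched.append({
            "lat": r.lat,
            "lon": r.lon,
            "captured_at": datetime.now(timezone.utc).isoformat(),  # context
            "device_role": "edge-client",                           # context
        })
    return enriched

# Only the cleaned, enriched subset would cross the network to the server,
# reducing both upstream traffic and latency-sensitive round trips.
payload = clean_and_enrich([
    GpsReading(51.5074, -0.1278, 8.0),
    GpsReading(51.5080, -0.1290, 120.0),  # dropped as too inaccurate
])
print(payload)
```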
2. Location, location, location
Datacentre locations are optimised to reflect technical, physical, legal, (geo)political and commercial considerations – Microsoft and Amazon building cloud islands in the UK being a case in point. There will still be core datacentres, as there is very much a need for them, but hybrid models that process and store data wherever it is most appropriate to do so are likely to arise over the next few years.
3. A better process around data collection and analytics
Edge analytics not only allows organisations to respond to data more quickly, but helps them create a better process around their data collection and analytics. It lets organisations choose what data to keep for the longer term, making data easier – and less expensive – to manage. In other words, edge analytics provides more options in how data is used and helps preserve those resources best suited to deeper analysis. As data generation and collection continues to expand at an exponential rate, it’s tempting to discard whatever isn’t immediately useful; in the long run, however, that risks inadvertently losing value that is as yet hidden and unknown.
It’s tempting to process the data at the edge – to distil and purify it, mapping it to agreed standards and enforcing things like referential integrity. ETL (Extract, Transform and Load), as used in the world of data-warehouse and BI solutions, is a classic example. In doing so, however, you effectively and unwittingly destroy some of the nuances of the data. It’s like recording music and quantising it: yes, it cleans it up and gets it in time, but there may be significant value in the nuances and oddities that are removed.
You can often spot fraud in the odd inconsistencies people purposely or unwittingly embed in financial data – inconsistencies you’ll miss if you clean the data first. Quantising (keeping a musical part in time) is best done non-destructively, so that you can apply varying degrees afterwards, undo it, and always retain the original. I argue that we must do the same with all raw data if we’re not to unwittingly destroy those nuggets of nuance and value we didn’t even realise were there. It’s fine to de-duplicate it and keep a cleansed version, but always keep the raw data: it’s the oil of the information revolution.
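In code terms, the discipline is simply to derive the cleansed view from the raw records rather than overwrite them. A minimal sketch, using hypothetical transaction records:

```python
import copy

# Hypothetical raw transaction records, warts and all.
raw_transactions = [
    {"id": "t1", "amount": 100.00, "payee": "ACME Ltd"},
    {"id": "t1", "amount": 100.00, "payee": "ACME Ltd"},   # duplicate
    {"id": "t2", "amount": 99.99,  "payee": "acme  ltd"},  # odd formatting
]

def cleanse(transactions):
    """Derive a de-duplicated, normalised view WITHOUT touching the raw data."""
    seen, cleaned = set(), []
    for tx in transactions:
        if tx["id"] in seen:
            continue  # de-duplicate in the derived view only
        seen.add(tx["id"])
        normalised = copy.deepcopy(tx)
        normalised["payee"] = " ".join(tx["payee"].split()).upper()
        cleaned.append(normalised)
    return cleaned

cleaned_view = cleanse(raw_transactions)
# The raw records survive unchanged, so oddities – like t2's strange
# formatting, which might interest a fraud model – are never lost.
assert len(raw_transactions) == 3 and len(cleaned_view) == 2
```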
4. Improved agility
In many cases, data is much more useful in real time. This is especially true of the data that flows from IoT sensors. Factory sensors, medical devices, trading and fraud-detection applications and system monitoring, among many other examples, all provide data that may need to be acted on in a faster, more responsive way. This so-called “stream processing” is important in applications where data needs to be processed quickly and/or continuously, as in the sketch below. As the pace of business increases, this capability is becoming a necessity in many industries.
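As a toy illustration of the idea – not any particular streaming framework – this sketch consumes an unbounded feed of sensor readings and reacts to each one as it arrives, rather than batching them up for later analysis. The temperature feed and alert threshold are invented:

```python
import random
import time
from typing import Iterator

def sensor_feed() -> Iterator[float]:
    """Stand-in for an unbounded stream of factory sensor temperatures."""
    while True:
        yield 20.0 + random.gauss(0, 5)
        time.sleep(0.1)

ALERT_THRESHOLD = 30.0  # hypothetical limit for this sensor

def process_stream(feed: Iterator[float], max_readings: int = 50) -> None:
    """React to each reading as it arrives – no waiting for a batch."""
    for i, temperature in enumerate(feed):
        if temperature > ALERT_THRESHOLD:
            print(f"reading {i}: {temperature:.1f}°C – raising alert now")
        if i + 1 >= max_readings:
            break  # bounded here only so the example terminates

process_stream(sensor_feed())
```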
5. Maximal battery life
In devices (industrial, domestic etc.) with no native high-bandwidth connectivity – a radiator controller that connects only via a mesh network, for example – maximising battery life by transmitting only pertinent data is key.
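One common pattern here is report-by-exception: only wake the radio when a reading has moved meaningfully since the last transmission, since the radio typically costs far more power than the sensor itself. A minimal sketch – the threshold and readings are purely illustrative:

```python
class ReportByException:
    """Transmit a reading only when it differs meaningfully from the
    last value sent – a common trick for stretching battery life."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.last_sent: float | None = None

    def maybe_transmit(self, reading: float) -> bool:
        if self.last_sent is None or abs(reading - self.last_sent) >= self.threshold:
            self.last_sent = reading
            return True   # worth waking the radio for
        return False      # stay quiet, save the battery

# Hypothetical radiator controller reporting temperature in 0.5°C steps.
reporter = ReportByException(threshold=0.5)
for temp in [21.0, 21.1, 21.2, 21.7, 21.7, 20.9]:
    if reporter.maybe_transmit(temp):
        print(f"transmitting {temp}°C")  # only 21.0, 21.7 and 20.9 are sent
```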
So is the cloud dead?
Not at all. Some have argued that the trend towards edge datacentres could signal the end of the cloud, but data still goes to the cloud; what changes is how it is distributed to the user. In fact, we see cloud adoption becoming pervasive: already 60% of businesses are moving to the cloud, according to our inaugural Trends Report last November.
Placing datacentres where it makes sense to – increasingly, closer to the edge – can provide a richer, more connected experience, maximise power efficiency and battery life, and resolve legal, commercial and (geo)political issues. But there will always be a need for core datacentres. Instead of considering them the opposite of the cloud, edge datacentres should be seen as an enabler and extension of it.