Scanning the signals
The computing landscape is changing to accommodate the future of the internet and all its users. No longer centralized solely in cloud services, processing now occurs at the edge, in devices, across multiple clouds and in managed services. The future is potentially even more exciting, with the rise of quantum and biological computing, and even DNA-based storage.
In the past, large-scale data processing was needed only by big enterprises. Since the advent of smartphones and the proliferation of IoT devices, we’ve seen a massive increase in the amount of data produced. Analysis of that data is no longer the domain of corporate data warehouses; data can live anywhere in the vast interconnected web of people, devices, cars, factories and cities. With more data comes the requirement for more computing power.
Alongside changes in the location of data and computing, computer architecture continues to evolve. The push to mobile has driven high-efficiency chips, including designs such as Arm’s big.LITTLE, which pairs cores optimized for high performance with cores optimized for efficiency, switching between them depending on workload. Signals of this shift include:
- The proliferation of devices capable of computing, like wearables, autonomous/smart cars or in-home “hubs”
- The growing availability of application-specific integrated circuits (ASICs), such as Google’s Tensor Processing Unit (TPU), which is designed specifically for neural network machine learning
- Processor advancements driven by mobile design, for example low-power Arm-based chips such as Apple’s M1
- Development of practical applications for quantum computers. Examples are likely to include cryptography, medical research, and certain complex optimization problems such as those found in finance and supply chain management
Making informed computing choices enables businesses to optimize IT costs as well as provide more responsive services to consumers. In the enterprise context, not all deployment options are equal.
Despite the easy availability of cloud computing, where your data actually lives and how you process it matters. Innovative network technologies can’t overcome fundamental physics: a data center halfway around the world will always have worse latency than one local to a region, or even one distributed to a home or workplace.
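The physics argument can be made concrete with a back-of-the-envelope calculation. The figures here are rough assumptions, not measurements: light in optical fibre travels at roughly 200,000 km/s (the speed of light divided by fibre’s refractive index of about 1.5), and real round trips are always slower due to routing, queuing and protocol overhead.

```python
# Theoretical lower bound on network round-trip time (RTT), based purely
# on signal propagation speed in optical fibre. Real-world RTTs are
# higher: routing detours, queuing and protocol overhead all add delay.

FIBRE_SPEED_KM_PER_S = 200_000  # approximate signal speed in fibre

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000

# A data center halfway around the world (~20,000 km) vs. one in-region (~500 km)
print(f"Remote: {min_rtt_ms(20_000):.0f} ms")  # 200 ms before any overhead
print(f"Local:  {min_rtt_ms(500):.1f} ms")     # 5.0 ms
```

Even in this idealized model, the remote data center is two orders of magnitude slower than the local one, which is why latency-sensitive workloads gravitate toward regional or edge deployments.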
This means there can be significant cost and customer-experience implications depending on where you choose to locate your data, how you move it around and how you compute with it. Selecting the most appropriate hardware, including chip type, size and memory, has a direct impact on the number of instances or virtual machines you need. Some use cases, such as healthcare, financial services, telecommunications and industrial IoT, require lower latency than a centralized platform can provide, and therefore more local computing resources.
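As a minimal sketch of how hardware choice drives instance counts and cost, consider sizing a memory-bound workload against two instance shapes. The instance names, memory sizes and prices below are hypothetical illustrations, not any real provider’s catalogue:

```python
import math

# Hypothetical instance types; names, sizes and prices are illustrative only.
INSTANCE_TYPES = {
    "general-8gb": {"memory_gb": 8,  "usd_per_hour": 0.10},
    "memory-32gb": {"memory_gb": 32, "usd_per_hour": 0.35},
}

def monthly_cost(instance: str, workload_memory_gb: float, hours: int = 730):
    """Instances needed to fit the workload in memory, and their monthly cost."""
    spec = INSTANCE_TYPES[instance]
    count = math.ceil(workload_memory_gb / spec["memory_gb"])
    return count, round(count * spec["usd_per_hour"] * hours, 2)

# A 96 GB in-memory workload: many small instances vs. a few large ones
print(monthly_cost("general-8gb", 96))  # (12, 876.0)
print(monthly_cost("memory-32gb", 96))  # (3, 766.5)
```

Here the larger instance type covers the same workload with a quarter of the instances at lower total cost, illustrating why chip, size and memory choices should be made deliberately rather than defaulted.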
Regardless of how resources are structured, it’s important to remember they will be seen by end customers as your responsibility. Consumers expect their connected devices to work, and if they can’t ring their doorbell or unlock their connected car due to a cloud provider’s downtime, they’ll blame the doorbell or car vendor, not the company providing the underlying computing.