Fog computing gives organizations greater choice over where data is processed — which is useful for things like Internet of Things (IoT) deployments.
What is it?
Fog computing is a decentralized computing infrastructure in which data, computation, storage, and applications sit somewhere between the data source and the cloud.
Many people use the terms fog computing and edge computing interchangeably, as both involve bringing processing closer to where the data is created. In general, though, the 'fog' sits between the 'edge' and some centralized location, such as the cloud.
Fog computing puts the capabilities and resources of the cloud closer to where data is generated and used. It is mostly used to improve performance for low-latency applications and real-time decision-making, but it can also serve security and compliance purposes.
What’s in it for you?
Fog computing is well suited to latency-sensitive applications, such as robots on a production line. By carrying out computation in the 'fog', you minimize the time between data being generated at the endpoint and that data being processed. It can also save on bandwidth costs, as the data doesn't have to travel all the way back to the cloud.
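The bandwidth-saving pattern described above can be sketched in a few lines: a fog node aggregates raw sensor readings locally and forwards only a compact summary upstream. This is a minimal illustration, not a real fog framework; the sensor values and the `summarize_batch` helper are hypothetical.

```python
from statistics import mean

def summarize_batch(readings):
    """Aggregate a batch of raw sensor readings on the fog node.

    Rather than forwarding every reading to the cloud, the node sends
    a compact summary, cutting upstream bandwidth use and letting
    latency-sensitive decisions happen locally.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": mean(readings),
    }

# Hypothetical temperature readings collected at the endpoint.
readings = [21.4, 21.6, 22.1, 35.0, 21.5]

summary = summarize_batch(readings)
# Only `summary` needs to travel to the cloud; the raw batch stays local,
# and a local rule (e.g. max > 30.0 triggers an alert) can act immediately.
```

In a real deployment the same idea applies at larger scale: filtering, aggregation, and anomaly detection run on fog nodes, while the cloud receives summaries for long-term analytics.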
What are the trade-offs?
Fog computing will inevitably increase the complexity of your technology estate, as you'll have many endpoints, potentially thousands of nodes, in the fog. That introduces management overhead, and securing each of those endpoints is non-trivial.
How is it being used?
Fog computing is commonly used in IoT deployments, as well as areas such as industrial automation, autonomous vehicles, predictive maintenance and video surveillance.
The emergence of 5G networks, which promise to deliver ubiquitous, high-speed wireless connectivity, should accelerate the deployment of fog computing systems.