The cloud liberated computing power, applications, and data storage from reliance on specific hardware managed in a specific location. But the cloud does typically exist in a specific location: a facility somewhere with racks of servers.
As the peripheral devices of the Internet of Things (IoT)—in homes, in factories, on ships, on drones, on offshore oil platforms—generate more data and make more demands on the network and the resources of the cloud, two main problems emerge: the bandwidth needed to ship all that data back to distant data centers, and the latency of waiting for a round trip before acting on it.
The solution is to move some computing power to the edges of the system, something known as fog computing.
The term fog computing was coined by Cisco—but originally it had nothing to do with the IoT. A decade or so ago, hybrid cloud architectures combining existing on-premises computing power with public cloud resources were increasingly popular. Cisco presented fog computing as a way of reducing latency between local and remote computing resources, but had trouble getting uptake in the market. Then the IoT exploded, and Cisco at last found the perfect implementation of the concept. It went on to help found the OpenFog Consortium to promote the use of fog computing.
Not that the hybrid cloud problem has gone away—around half of all enterprises maintain significant computing resources on-premises, precisely because of latency issues. But IoT markets are where the real growth will come.
There are actually two related concepts at play: edge computing and fog computing. Both models push data processing capabilities closer to where the data originates, but differ in their emphasis. Crudely, fog computing locates the intelligence in the local area network while edge computing puts it inside the devices themselves.
Sensors in devices harvest data. That data can be processed in the cloud, on the local network, or in the devices themselves. Data processed in the cloud has further to travel, so some delay is unavoidable. Fog computing allows some of that processing to happen on the local network rather than in the cloud, while edge computing brings at least some processing into the device itself.
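The three tiers can be sketched as plain functions. This is a minimal illustration, not a real IoT stack: the function names, the valid-range filter, and the summary format are all hypothetical, and a real deployment would use an IoT framework and a transport such as MQTT or HTTPS.

```python
def process_on_device(raw_samples):
    """Edge tier: the device itself drops obviously invalid readings."""
    return [s for s in raw_samples if 0.0 <= s <= 100.0]

def process_on_gateway(device_readings):
    """Fog tier: a local-network gateway aggregates cleaned readings
    from many devices into one compact summary."""
    flat = [s for readings in device_readings for s in readings]
    return {"count": len(flat),
            "mean": sum(flat) / len(flat) if flat else None}

def send_to_cloud(summary):
    """Cloud tier: only the small summary crosses the wide-area link.
    (Placeholder for a real upload call.)"""
    return summary

# Two devices each clean their own data before the gateway sees it.
device_a = process_on_device([12.5, -3.0, 47.1])    # -3.0 is discarded
device_b = process_on_device([55.0, 101.2, 60.0])   # 101.2 is discarded
summary = send_to_cloud(process_on_gateway([device_a, device_b]))
```

The point of the structure is that each tier reduces data volume before passing it upward: six raw samples become four cleaned readings, which become a two-field summary.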
Processing data at or near the device isn’t new. After all, local processing and control was the only way to do things before networks connected everything. But now we can distribute data storage and processing in a balanced way, benefiting from the different strengths of the cloud and the fog.
Then there are the security implications. Depending on the data you are processing, you may want it handled on your own network or devices rather than sent over the wire to the cloud. Data in transit to the cloud is exposed to a higher risk of man-in-the-middle attacks, a risk that largely disappears if the data stays within the confines of your network. If you are subject to regulation, this is something you will need to consider.
If bandwidth were infinite and latency zero, there would be no need for any edge devices at all. There is no theoretical advantage to computing at the edge, just a practical one.
Cloud computing has substantial benefits that it is important to keep in mind. It takes place in a controlled environment with secure access, in rack-mounted processors that are easily monitored, diagnosed, upgraded, and replaced.
Cloud data centers have been sited where land and other costs are low, which means they tend to be far from population or industrial centers. This increases both the network loading and the latency.
Fog computing minimizes these problems, but comes with its own costs. Edge and fog devices have to be provisioned individually, must often be rugged to function in difficult environments, require a physical visit in case of a hardware problem or upgrade, and need to have their own power supplies.
With local processing, every bit of generated data does not need to be sent to the cloud for analysis. Instead, much smaller amounts of cleaned, filtered, and preprocessed data can go to the cloud for larger-scale analysis. The conclusions reached there will inform decisions on fog functionality, improving the balance between the two.
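One simple form of that filtering is downsampling: the fog layer replaces each window of raw readings with a single average before uploading. The sketch below assumes a hypothetical 60-sample window; real systems would tune the window to their bandwidth and accuracy budgets.

```python
def downsample(readings, window=60):
    """Replace each window of raw readings with its average, so only
    1/window of the original data volume goes over the WAN link."""
    return [
        sum(readings[i:i + window]) / len(readings[i:i + window])
        for i in range(0, len(readings), window)
    ]

# 600 raw sensor samples collapse to 10 uploaded values.
raw = [20.0 + (i % 5) * 0.1 for i in range(600)]
uploaded = downsample(raw)
print(len(raw), "->", len(uploaded))  # 600 -> 10
```

The cloud still sees the trend it needs for large-scale analysis, while the network carries a small fraction of the traffic.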
Autonomous vehicles are perhaps the clearest example of how the hybrid cloud/fog system will work. Individual cars absolutely need to make split-second decisions on their own, without relying on remote computing power. But they will also be generating data on delays, weather, road conditions, and their own internal status that will flow into the cloud to be combined with data from millions of other vehicles, eventually leading to decisions that improve vehicle and traffic performance.
The cloud is all-powerful and scales, but the fog is its obedient and highly capable servant. Exactly how computing at the edge will finally be implemented will come out of a competition between various vendors, consortiums, and standards. The result will be a complex hybrid system that puts computing power and decision making at whatever location is optimal.
Alex Jablokow is a freelance writer who specializes in technical and healthcare business. He blogs about the Internet of Things, software, inertial guidance systems, and other topics for business clients. Sturdy Words, his freelance content business, is at www.sturdywords.com.