Years before cloud computing completely revolutionized where and how we could access technology, there were regional, national and even global “grids” processing huge data sets for researchers, without requiring heavy machinery in each of their labs.
But if grid computing was the precursor to commercial clouds operated by Microsoft, Amazon, IBM, Google and Salesforce, these clouds are now enabling new kinds of computing that promise to work with billions of devices faster and more efficiently.
As Ian Foster, the Argonne National Laboratory computer scientist who pioneered grid computing in the mid-1990s, explained in a recent interview on the Argonne website:
Grid and cloud [computing] have each been made possible by increasingly widely deployed and capable physical networks, first among science labs, for the grid, then in homes and businesses, for the cloud. But this dependence on physical connections means that these public services can never be universal.
Even with the millions of servers supporting cloud technology, latency (the time it takes for data to move from one point to another) remains a challenge for certain types of applications. And while commercial providers have answered many questions about the security, cost, and bandwidth requirements of cloud computing, there are limits to what the technology can do.
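To make latency concrete, here is a minimal Python sketch that times a round trip to a web server. The URL is a placeholder, and the number it prints includes server processing time, not just time on the wire.

```python
import time
import urllib.request

# Measure round-trip latency: the time for a request to reach the
# endpoint and for the first byte of the response to come back.
# "https://example.com" is a placeholder; substitute any reachable URL.
URL = "https://example.com"

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=5) as response:
    response.read(1)  # block until the first byte arrives
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Round-trip latency to {URL}: {elapsed_ms:.1f} ms")
```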
Enter a potential new wave of computing architecture, summed up by Foster as “the emergence of ultra-fast wireless networks that will allow access to computing anywhere, anytime, the only limit being the speed of light”.
Check out our glossary to understand where cloud computing came from and where it’s headed.
Grid computing
In grid computing, computers work together remotely to handle intensive processing needs. It is mainly used in scientific research, but it is also useful for risk-management calculations at financial firms, development tasks for video game designers, and even special effects in movies. The entire constellation of machines, or “nodes,” runs on software and standards that allow data to be easily shared across the network.
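As a rough illustration of the grid pattern, the sketch below splits one large job into chunks and fans them out to worker processes standing in for remote nodes. This is a local simulation only; a real grid scheduler such as HTCondor dispatches the chunks to machines across a network instead.

```python
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk: list[int]) -> int:
    """The intensive task each 'node' performs; summing squares stands in for it."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Split one big dataset into chunks, one per node.
    dataset = list(range(1_000_000))
    chunk_size = 250_000
    chunks = [dataset[i:i + chunk_size]
              for i in range(0, len(dataset), chunk_size)]

    # Worker processes play the role of remote grid nodes.
    with ProcessPoolExecutor(max_workers=4) as nodes:
        partial_results = list(nodes.map(process_chunk, chunks))

    # Combine the partial results from every node.
    print("Combined result:", sum(partial_results))
```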
Cloud computing
Described by some as a “network with a business model”, a cloud is essentially a network of servers that can store and process data. In the cloud, data is retrievable on demand and delivered over the internet.
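That on-demand contract is easiest to see in code. Below is a hedged sketch that stores and retrieves an object from AWS's S3 service using the boto3 library; the bucket and key names are hypothetical, and it assumes AWS credentials are already configured in the environment.

```python
import boto3

# A client for S3, Amazon's cloud object store.
s3 = boto3.client("s3")

# Store data in the cloud ("my-example-bucket" is a hypothetical bucket).
s3.put_object(
    Bucket="my-example-bucket",
    Key="notes/hello.txt",
    Body=b"stored in the cloud, retrievable on demand",
)

# Retrieve it on demand, over the internet, from anywhere.
obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
print(obj["Body"].read().decode())
```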
The term “cloud computing” entered the lexicon in the mid-1990s (it is thought to have been coined at Compaq Computer). But the concept only started to take off when Amazon Web Services launched a beta version of its public cloud in 2002. Even then, it would be years before anyone outside a small circle of early adopters knew what to do with it.
Today, cloud computing is a nearly $500 billion global business, projected by Gartner Research to reach almost $600 billion in 2023.
Edge computing
What if the kinds of computing done in the cloud could be fully decentralized, with processing moved close to where the data is generated, or even directly onto the individual devices that depend on it? Then you would have edge computing, carried out at the edges of a network. Here, latency and bandwidth aren’t as much of an issue, because computing takes place much closer to the devices themselves.
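A minimal sketch of that idea: process sensor readings locally and send only a compact summary upstream, instead of streaming every raw sample to a distant data center. The `send_to_cloud` function is a hypothetical stand-in for whatever uplink (HTTP, MQTT) a real deployment would use.

```python
import statistics

def send_to_cloud(payload: dict) -> None:
    # Placeholder for a real uplink call (HTTP POST, MQTT publish, etc.).
    print("uplink:", payload)

def summarize_on_device(raw_samples: list[float]) -> None:
    # All the heavy lifting happens locally, near where data is generated.
    summary = {
        "count": len(raw_samples),
        "mean": statistics.fmean(raw_samples),
        "max": max(raw_samples),
    }
    # A few bytes go upstream instead of thousands of raw samples.
    send_to_cloud(summary)

summarize_on_device([21.4, 21.9, 22.1, 21.7, 35.0, 21.8])
```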
This raises new security issues as data moves away from a secure core in the cloud, but businesses abound that are working to resolve them.
A recent estimate says edge computing will be a $150 billion industry by 2030. Expect it to work alongside, rather than cannibalize, the cloud.
Fog computing
Finally, there is fog computing, a term coined by network equipment maker Cisco in 2014. An amalgamation of cloud and edge computing, it essentially brings the cloud down to the edge of a network, where it settles like a layer of fog over a landscape, sending data to and from the smart devices that generate and need information (self-driving cars, for example). With fog computing, data does not need to travel all the way back to the core of the cloud, which reduces latency and bandwidth requirements.
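A toy sketch of a fog node's logic, using self-driving-car-style readings: latency-critical decisions are made at the fog layer, while routine telemetry is batched for the cloud. The threshold and function names are illustrative, not taken from any real system.

```python
URGENT_BRAKE_DISTANCE_M = 5.0  # illustrative safety threshold

_batch: list[float] = []  # telemetry queued for the cloud

def forward_to_cloud_later(value: float) -> None:
    # Non-urgent data is batched and shipped in bulk to save bandwidth.
    _batch.append(value)
    if len(_batch) >= 100:
        print(f"uploading {len(_batch)} readings to the cloud")
        _batch.clear()

def handle_reading(distance_to_obstacle_m: float) -> str:
    if distance_to_obstacle_m < URGENT_BRAKE_DISTANCE_M:
        # Latency-critical decision made locally at the fog layer,
        # with no round trip to a distant data center.
        return "BRAKE"
    forward_to_cloud_later(distance_to_obstacle_m)
    return "OK"

print(handle_reading(3.2))   # -> BRAKE (decided locally)
print(handle_reading(42.0))  # -> OK (queued for the cloud)
```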