EDGE COMPUTING
Timeline of Computing
The timeline of computing looks something like this: at first there was one big computer; then came the UNIX era; then personal computers, which in turn led to the cloud computing era. Now we find ourselves in the next stage: the edge computing era. Today we use our personal computers to access centralized services such as Gmail, Google Drive, cloud storage, and Office 365, and the personal assistants on our smartphones and smart speakers are powered by centralized cloud artificial intelligence.
It is fair to say that the next opportunities for cloud services lie at the edge. Even though we are still in the cloud computing era, our cloud infrastructure relies heavily on the hosting and compute power of the very few companies that provide it: Amazon, Microsoft, IBM, and Google.
Why do we call it Edge?
The word edge is used in the context of geographic distribution: edge computing is computing conducted at or near the source of the data. Whereas cloud computing requires a data center in order to function, edge computing does not make the cloud disappear; rather, it brings the cloud closer to its end users. With that in mind, let's examine why edge computing is being praised by tech communities around the world.
Computing at the edge of a device is changing the way data is processed, handled, and delivered to millions of devices around the world. The exponential growth of the IoT industry and of connectivity drives the market need for the real-time computing power that sits at the core of edge computing systems.
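To make the idea concrete, here is a minimal sketch in Python of the pattern described above: process raw readings where they are produced and send only a compact summary upstream. The names read_sensor and send_to_cloud are hypothetical placeholders, not a real API; an actual edge node might read real hardware and publish over MQTT or HTTPS.

import random
import statistics
import time

def read_sensor() -> float:
    # Placeholder for reading a local sensor (e.g. a temperature probe).
    return 20.0 + random.random() * 5.0

def send_to_cloud(summary: dict) -> None:
    # Placeholder for the upstream call; prints instead of networking.
    print("uploading summary:", summary)

def run_edge_loop(window_seconds: int = 10, sample_hz: int = 100) -> None:
    # Sample locally at a high rate, but upload only one summary per
    # window, so a tiny fraction of the data ever crosses the network.
    while True:
        readings = []
        for _ in range(window_seconds * sample_hz):
            readings.append(read_sensor())
            time.sleep(1.0 / sample_hz)  # pace the sampling loop
        send_to_cloud({
            "mean": round(statistics.mean(readings), 2),
            "max": round(max(readings), 2),
            "count": len(readings),
        })

if __name__ == "__main__":
    run_edge_loop()

The point of the sketch is the shape of the loop: the heavy per-sample work stays on the device, and the cloud sees only aggregates, which is what makes real-time processing feasible over constrained links.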
5G wireless networking technology allows edge computing systems to better support real-time applications and analytics, AI and robotics, and video processing, all of which are used in self-driving cars, smart home environments, and Industry 4.0, to name a few.