Now that a lot of people have become familiar with cloud computing, there’s a new cool kid on the block: edge computing. Why is the cloud suddenly losing ground to edge computing, and where will we see this popular technology pop up most? You’ll find out here.
Thinking local
Technology adoption works like a pendulum. First, we used local machines: big mainframes that were the heart and brains of a company. Naturally, these single points of failure came with a few risks and rather limited flexibility. Physical breaches were a real threat: an employee having an off day could just walk into the server room and switch everything off, hackers could find a way in, and a fire could devour all that precious data in a flash.
Stuff it all in the cloud
So the pendulum swung in the opposite direction: “Let’s move everything to the cloud!” The pros? You can make use of Amazon’s, Azure’s and Google’s immense computing power, storage and growing range of services (AI, security, data management, databases, access & identity,…), and your infrastructure is less prone to hacking. The cloud comes with its own set of limitations, though. Sending all this data to the cloud every millisecond can cause bottlenecks. Just think about all the points this data has to travel through. When you want to visit a website, your computer sends a signal to the wireless access point in your office, then on to a network switch, and next to the router connected to the modem. That modem communicates with the Internet Service Provider (like Telenet or Proximus in Belgium), which sends it through other networks to finally arrive at, for example, Amazon’s data servers, probably pushing it through cables at the bottom of the ocean. No wonder it’s called surfing the internet. And then the response still has to travel all the way back to that one computer somewhere in the world. The magic of networking.
Besides possible latency and bottlenecks, sending tons of data drains device batteries (a lot of IoT devices are battery-powered), uses a lot of bandwidth and raises possible privacy issues. Everything that is sent to the cloud could be spied on. Wired journalist Clive Thompson justly asks his Amazon Echo every so often: “Alexa, are you eavesdropping on me?”. He makes a convincing case that Alexa might need to push a lot of data to the cloud because it needs to know almost everything, but what about all those smaller smart devices, like a smart doorbell, coffee machine or washing machine? Companies creating voice assistants want to turn them into C-3POs, but they don’t need to be able to discuss worldly matters with you or become your friends. They just need to understand some basic commands. That’s where edge computing comes in handy.
Data living on the Edge
Edge computing allows data from Internet of Things devices to be processed at the edge of the network before being sent to a data center or the cloud. So the data does not need to be pushed to the cloud every millisecond. Instead, the device can make decisions on its own and determine, based on the (aggregated) data, whether it wants to push information to the cloud.
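That aggregate-then-decide pattern can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: the sensor readings, the `summarize` and `should_push` helpers and the threshold are all invented for illustration, and a real device would replace the `print` with an HTTPS or MQTT call to the cloud.

```python
from statistics import mean

def summarize(readings):
    """Reduce a batch of raw readings to a small summary worth sending upstream."""
    return {"count": len(readings),
            "mean": round(mean(readings), 2),
            "max": max(readings)}

def should_push(summary, threshold=30.0):
    """Edge-side decision: only contact the cloud when something notable happened."""
    return summary["max"] >= threshold

# e.g. temperature samples gathered locally on the device
readings = [21.5, 22.0, 35.2, 21.8]
summary = summarize(readings)
if should_push(summary):
    print("push to cloud:", summary)   # in practice: an HTTPS/MQTT upload
else:
    print("keep local, nothing notable")
```

Instead of streaming four raw readings, the device uploads one small summary, and only when its own rule says it is worth it: that is the bandwidth and battery saving in a nutshell.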
Take the driverless car. If five schoolchildren suddenly pop up in front of it, it needs to make a decision in a split second. It cannot depend on a good internet connection and start communicating with the cloud to respond correctly. So the calculation magic happens where it is needed, on the edge hardware built into the car: interpreting the video stream, identifying people and activating the brakes.
A good example of a device using AI at the edge is Telraam. We are currently working together with TML (Transport & Mobility Leuven) on the second version of this fine piece of technology. It’s used to measure traffic in streets, which provides a wealth of information to adapt infrastructure, signage and traffic regulations and create safer and more pleasant streets. The device is attached to a window and observes the road, detecting passing traffic. Using AI algorithms, it can detect whether a car, bike, truck, van or pedestrian is passing by and measure its speed. It does not send images featuring people or license plates to the cloud, dodging privacy issues: this data is deleted immediately. Only the end result is pushed to the cloud.
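A Telraam-style privacy-preserving pipeline could look roughly like this. This is a hypothetical Python sketch, not Telraam’s actual code: `classify_frame` stands in for the real on-device detection model, and the frame dictionaries stand in for camera frames. The point is the shape of the pipeline: classify locally, discard the raw image, keep only counts.

```python
from collections import Counter

def classify_frame(frame):
    """Placeholder for the on-device AI detector (car, bike, truck, ...)."""
    return frame["label"]

def process_stream(frames):
    counts = Counter()
    for frame in frames:
        counts[classify_frame(frame)] += 1
        del frame["pixels"]        # the raw image never leaves the device
    return dict(counts)            # only this summary is pushed to the cloud

frames = [{"label": "car", "pixels": b"..."},
          {"label": "bike", "pixels": b"..."},
          {"label": "car", "pixels": b"..."}]
print(process_stream(frames))      # e.g. {'car': 2, 'bike': 1}
```

The cloud only ever sees aggregate counts per category, never a pixel, which is exactly how the privacy issue is dodged.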
Since a lot of devices do not need to be as smart as Alexa or C-3PO, or need the computing power of a computer army, we will see edge computing pop up in all areas of industry. Devices will become smarter and increasingly independent: making decisions, aggregating data and only communicating sets of data to the cloud when it is useful. Training algorithms and creating (historical) dashboards will still need lakes of data on cloud infrastructure, but real-time data and decisions will move more and more from the cloud to local hardware. This way you have the data where and when you need it: historical data to train your AI and create insights, and real-time data to make real-time decisions. We see this happening in every industry, from predictive maintenance in large factory halls and security cameras interpreting video streams to devices helping the elderly live longer in their own homes.
Conclusion
So what is edge computing? It’s basically letting a smart device collect and process data on its own, neatly balancing computing power, energy consumption and privacy. We will see a lot more devices pop up using edge computing, and also fog computing, which sits one level up from the edge. But that’s food for a whole new article. So stay tuned.