Did you know that training a single large neural network can emit up to 5 times as much CO2 as an average car does over its entire lifetime?! This showcases a huge problem: AI, in general, seems to consume a lot of power. And since we collect more and more data every year, this problem will only continue to grow.
More data means more (and sometimes even bigger) models, which in turn increases energy consumption. We’re already aware of this rising problem, so we need to act NOW before it is too late. How do we do this?
1. We can start by optimizing energy efficiency:
Connect the device to the cloud
In the cloud, hardware gets shared, meaning the power consumption gets centralized in one location. This gives us the opportunity to optimally use the hardware, optimize the power consumption and get large gains in return. Additionally, the necessary cooling for these large data centers can be optimized and the produced heat can even be recycled!
However, this poses scaling problems when the fleet of connected devices grows. To circumvent this, we can move the intelligence to the edge, onto the devices themselves.
Reduce the energy consumption on the edge
For embedded or edge devices, the hardware doesn't get shared. In this case, you need to maximize the time the system spends in the 'idle task'. While idle, put the system into a low-power state using the hardware's low-power features. Other options are to reduce the complexity of the code or the number of computations.
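As a minimal sketch, assuming a Python-capable device, an event loop like the one below keeps the system in the idle task whenever possible; `enter_low_power_sleep`, `pending_work` and `handle` are hypothetical placeholders for your hardware's low-power API and your application code.

```python
import time

# Hypothetical helpers: replace with your platform's low-power API
# (for example, machine.lightsleep() on MicroPython or an RTOS tickless-idle hook).
def enter_low_power_sleep(max_ms: int) -> None:
    # Stand-in: a real implementation would halt the CPU until an interrupt or timeout.
    time.sleep(max_ms / 1000)

def pending_work() -> list:
    # Stand-in: return the events that woke the device (sensor interrupt, timer, ...).
    return []

def handle(event) -> None:
    # Application-specific processing; keep it short to limit computations.
    pass

def main_loop() -> None:
    while True:
        for event in pending_work():
            handle(event)
        # Nothing left to do: maximize the idle task by sleeping until the next event.
        enter_low_power_sleep(max_ms=1000)
```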
2. Take into account the network quality:
Network-wise, not only the availability but also the quality (especially the latency) of the network should be considered with regard to power consumption. When the network connection is poor, the hardware takes longer to send the same amount of data and thus consumes more power. Research has even shown that this relation is exponential!
So how can you tackle this latency problem? One possibility is to combine the data into batches before sending it to the cloud. An even better approach is to wait until the network connection improves before sending these batches, which further reduces the required energy. You can also process the data in parallel, which is more efficient. But this 'caching of data' also brings problems along with it: you have to wait longer to get an answer back, so the Quality of Service (QoS) might be lower.
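Here is a minimal sketch of this batching idea in Python; the batch size and latency threshold are arbitrary assumptions, and `measure_latency_ms` and `upload` are hypothetical stand-ins for your own connectivity check and cloud client.

```python
from collections import deque

BATCH_SIZE = 32                 # assumed batch size
LATENCY_THRESHOLD_MS = 150.0    # assumed: only transmit while the link is "good enough"

buffer = deque()                # local cache for samples awaiting upload

def measure_latency_ms() -> float:
    # Hypothetical stand-in: in practice, ping your backend and return the round-trip time.
    return 80.0

def upload(batch: list) -> None:
    # Hypothetical stand-in: in practice, send the batch to your cloud endpoint.
    print(f"uploading {len(batch)} samples in one radio burst")

def submit(sample) -> None:
    """Cache samples locally and only transmit full batches over a good connection."""
    buffer.append(sample)
    if len(buffer) >= BATCH_SIZE and measure_latency_ms() < LATENCY_THRESHOLD_MS:
        upload([buffer.popleft() for _ in range(BATCH_SIZE)])
    # Trade-off: cached data waits in the buffer, so answers arrive later (lower QoS).

for i in range(100):            # example usage with dummy sensor readings
    submit({"reading": i})
```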
Both approaches discussed above come with advantages but also disadvantages, and each offers its own ways to help minimize the actual energy consumption.
3. If you can measure it, you can manage it!
So how do you decide which task to perform on the edge and which in the cloud? It might seem contradictory, but you can train an AI system with reinforcement learning to maximize the energy efficiency of each device (individually) throughout its lifetime. But in order to optimize something, you first need good-quality data, and to get that data you need to measure whatever you want to optimize.
To make this concrete, let's again look at optimizing the energy efficiency of a connected device. After deciding where to perform a task (edge or cloud), you can use the measured power consumption to teach the model how good that decision was. This, however, also costs energy and time, since you need to collect data and train the network. In the end, it's a balancing exercise that is worth the effort, taking into account not only the technological side but also the business side!
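As a minimal sketch (a simple epsilon-greedy bandit, not necessarily the exact method used here), the Python snippet below learns from measured energy where a task is cheapest to run; `run_task_and_measure_energy_mj` is a hypothetical hook into your own device and power meter, and the numbers are dummy values.

```python
import random

ACTIONS = ["edge", "cloud"]
EPSILON = 0.1                              # assumed exploration rate
avg_energy = {a: 0.0 for a in ACTIONS}     # running mean of measured energy per action
counts = {a: 0 for a in ACTIONS}

def run_task_and_measure_energy_mj(action: str) -> float:
    # Hypothetical stand-in: run the task on the chosen target and return the
    # energy the device actually measured for it (here: random dummy values).
    return random.uniform(5, 15) if action == "edge" else random.uniform(8, 20)

def choose_action() -> str:
    """Mostly pick the historically cheapest option, occasionally explore."""
    if random.random() < EPSILON or min(counts.values()) == 0:
        return random.choice(ACTIONS)
    return min(ACTIONS, key=lambda a: avg_energy[a])

def step() -> None:
    action = choose_action()
    energy = run_task_and_measure_energy_mj(action)   # lower energy = better decision
    counts[action] += 1
    avg_energy[action] += (energy - avg_energy[action]) / counts[action]

for _ in range(1000):                      # the device keeps learning over its lifetime
    step()
print("cheapest target so far:", min(ACTIONS, key=lambda a: avg_energy[a]))
```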
Key takeaways
- Energy efficiency can be optimized, either in the cloud or on the edge.
- The network quality, especially the latency, is also a key factor for energy consumption.
- With the use of AI, the optimal load balancing can be determined for each connected device individually, so that its battery is used optimally.
Any questions or want to know more about this topic? Watch the complete presentation during our InnoDays webinar or get in touch!