For decades, robots have been taking over repetitive tasks from humans. Thanks to recent progress in AI and specifically machine learning, the number of possible applications for robots is increasing significantly.
The robotics evolution
Until recently, robots only performed repetitive and predictable tasks in a strictly controlled environment (such as a production hall). Whenever their task changed unexpectedly, things went wrong: a car chassis was damaged, or chocolate ended up next to the chocolate spread.
Today, vacuum cleaners and lawn mowers perform tasks autonomously in our uncontrolled and unpredictable world, where there is a high probability that a robot will be hindered in performing its task and will need to react accordingly. Mowing an entire lawn properly is a challenge in itself. Doing so next to a swimming pool with children playing, their toys and their pets is clearly even more difficult.
Functioning in our uncontrolled world requires intelligence; in the case of robots, this is obviously artificial intelligence. In robotics, three key challenges are addressed with AI: perception, planning and navigation, and robot control.
Perception
Perception is about extracting an understanding of the world from data captured by sensors. A robot therefore needs, first, sensors to collect data about its environment and, second, algorithms to interpret the raw sensor data and translate it into an understanding of (changes in) that environment.
Although you would expect an RGB camera to be an excellent candidate sensor, this was far from obvious before the introduction of deep learning. At that time, interpreting images required extensive manual programming, and even a seemingly simple task such as image classification (e.g. 'hot dog or not hot dog', as parodied in the 'Silicon Valley' series) remained a major challenge. As a result, until recently RGB cameras were unsuitable for most robotics tasks. With the arrival of neural networks and the wider field of machine learning, this has changed.
Image classification was the first computer vision problem to be tackled using annotated sample images. Based on these examples, the neural network can automatically classify new images with remarkable accuracy.
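To make the idea of learning from annotated examples concrete, here is a deliberately tiny, self-contained sketch. It is not a neural network: it classifies by distance to per-label feature averages, and the feature vectors and labels are invented stand-ins for real image data. The principle is the same, though: the classifier is derived from labelled examples rather than hand-written rules.

```python
import math

# Toy "annotated dataset": feature vectors with labels, standing in for
# labelled images (a real system would learn features from pixel data).
examples = [
    ([1.0, 1.2], "hot dog"),
    ([0.9, 1.0], "hot dog"),
    ([4.0, 4.2], "not hot dog"),
    ([4.1, 3.9], "not hot dog"),
]

def train_centroids(data):
    """Average the feature vectors per label: the simplest possible 'training'."""
    sums, counts = {}, {}
    for features, label in data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest in feature space."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], features))

model = train_centroids(examples)
print(classify(model, [1.1, 1.1]))  # a new, unlabelled sample -> hot dog
```

A deep network replaces these hand-made feature vectors with features learned directly from the pixels, but the training data still consists of nothing more than annotated examples.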
Westray, a Belgian start-up that works with Verhaert, develops smart safety solutions for the shipping industry. Their multi-camera system and powerful algorithms monitor a ship's surroundings. After an object is detected, the AI identifies it: dock, container ship, rock, fishing boat, buoy, pirate boat.
The object's trajectory is then calculated and, if the ship is on a collision course, the crew is notified that evasive action is needed. For now the ship's crew decides on the evasive action itself, but Westray is clearly taking big steps towards autonomous shipping.
Board interface concept design for Westray ©VERHAERT
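The collision-course check described above can be illustrated with the classic closest-point-of-approach (CPA) calculation for two constant-velocity tracks. This is a textbook simplification, not Westray's actual algorithm, and the function and variable names are my own.

```python
import math

def closest_point_of_approach(p_own, v_own, p_other, v_other):
    """Return (time, distance) of closest approach for two 2D tracks,
    each assumed to move at constant velocity (positions in m, speeds in m/s)."""
    # Relative position and velocity of the other vessel w.r.t. our own.
    dpx, dpy = p_other[0] - p_own[0], p_other[1] - p_own[1]
    dvx, dvy = v_other[0] - v_own[0], v_other[1] - v_own[1]
    dv2 = dvx * dvx + dvy * dvy
    # Time that minimises the relative distance, clamped to the future.
    t = 0.0 if dv2 == 0.0 else max(0.0, -(dpx * dvx + dpy * dvy) / dv2)
    return t, math.hypot(dpx + dvx * t, dpy + dvy * t)

# Own ship heading east at 10 m/s, target 1000 m ahead heading straight at us:
t, d = closest_point_of_approach((0, 0), (10, 0), (1000, 0), (-10, 0))
# closing at 20 m/s over 1000 m -> t = 50.0 s, d = 0.0 m: collision course
```

A production system would typically also account for vessel manoeuvring, sensor uncertainty and the maritime right-of-way rules before raising an alarm.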
The greatest advantage of neural networks lies in their reusability across applications. An algorithm that uses annotated examples to detect cancerous tissue in medical images can be adapted to other applications relatively easily, for example by presenting it with an annotated dataset of overripe, unripe, misshapen or rotting strawberries.
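This adaptation step is commonly implemented as transfer learning: the pretrained feature-extraction layers are kept frozen and only a new, small "head" is retrained on the new annotated dataset. Below is a minimal sketch of that idea, with a hand-fixed toy matrix standing in for a real pretrained backbone and invented "strawberry" features (the names, data and weights are all illustrative).

```python
# Frozen "backbone": fixed weights standing in for a pretrained feature
# extractor; these are never updated during adaptation.
BACKBONE = [[1.0, 0.5], [-0.3, 1.0]]

def extract_features(x):
    # ReLU of a fixed linear map, mimicking frozen network layers.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in BACKBONE]

def train_head(data, epochs=20, lr=0.1):
    """Train only a new linear head on the frozen features (perceptron rule)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:  # y is +1 or -1
            f = extract_features(x)
            if y * (sum(wi * fi for wi, fi in zip(w, f)) + b) <= 0:
                w = [wi + lr * y * fi for wi, fi in zip(w, f)]
                b += lr * y
    return w, b

# New annotated dataset: hypothetical strawberry features (size, discolouration),
# labelled +1 = good, -1 = reject.
data = [([2.0, 0.2], +1), ([1.8, 0.1], +1), ([0.2, 2.0], -1), ([0.1, 1.9], -1)]
w, b = train_head(data)

def predict(x):
    f = extract_features(x)
    return +1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else -1
```

Because only the small head is trained while the backbone is reused, adapting to a new task needs far less annotated data and compute than training a network from scratch.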
Soon these algorithms were rolled out in other applications such as object recognition, object segmentation, object tracking and anomaly detection. Even decades-old algorithms for keypoint detection, optical flow calculation and stereo/mono depth estimation are being (partially) replaced by learned alternatives built on 'meaningful compression' and 'feature extraction', thanks to their improved performance and easier development process.
Download the perspective to continue reading on how AI can help you to improve robotics