A team of researchers at the University of Virginia School of Engineering and Applied Science has developed a biomimetic vision system inspired by the unique visual capabilities of praying mantis eyes. The work aims to improve the performance of technologies such as self-driving cars, UAVs, and robotic assembly lines, while addressing a persistent challenge in AI-driven systems: the inability to accurately perceive static or slow-moving objects in 3D space.
For example, self-driving cars currently rely on visual systems that, much like the compound eyes of most insects, excel at motion tracking and offer a wide field of view but struggle with depth perception. The praying mantis is an exception: its eyes have overlapping fields of view, giving it binocular vision and the ability to judge depth in 3D space, a capability the research team sought to replicate.
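To make the geometry concrete, the sketch below shows the standard pinhole-stereo relationship between binocular disparity and depth: the farther apart a point lands on the two overlapping views, the closer it is. The focal length, eye separation, and disparity values are illustrative assumptions, not parameters of the UVA device or its actual processing pipeline.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair (pinhole camera model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


if __name__ == "__main__":
    f_px = 700.0   # assumed focal length in pixels
    b_m = 0.012    # assumed separation between the two 'eyes' in metres
    for d_px in (35.0, 7.0, 1.4):   # larger disparity means a nearer object
        z_m = depth_from_disparity(f_px, b_m, d_px)
        print(f"disparity {d_px:4.1f} px -> depth {z_m:.2f} m")
```

The same principle scales from a 12 mm "eye" separation to the much wider baselines used on vehicles; only the baseline term in the formula changes.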
The researchers, led by Ph.D. candidate Byungjoon Bae, designed artificial compound eyes that mimic this biological capability. These “eyes” integrate microlenses and multiple photodiodes, using flexible semiconductor materials to emulate the convex shape and faceted lens positions of mantis eyes. The design allows a wide field of view while maintaining precise depth perception.
According to Bae, the system provides real-time spatial awareness, which is crucial for applications that interact with dynamic environments. A key innovation is its use of edge computing: processing data directly at or near the sensors that capture it. Keeping the computation local cuts both processing time and power consumption, achieving more than a 400-fold reduction in energy usage compared with traditional visual systems. That efficiency makes the technology particularly well suited to low-power vehicles, drones, robotic systems, and smart home devices.
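As a rough illustration of the edge-computing idea, the sketch below keeps the per-pixel work next to the sensor and transmits only a compact summary of what changed, rather than streaming every raw frame to a central processor. The class name, frame size, threshold, and byte counts are hypothetical, not details of the published design.

```python
import numpy as np


class EdgeVisionNode:
    """Hypothetical sensor node: processes frames locally, emits only a small summary."""

    def __init__(self, shape=(64, 64), threshold=12):
        self.last = np.zeros(shape, dtype=np.int16)   # previous frame, kept on the sensor
        self.threshold = threshold                    # minimum brightness change to report

    def process(self, frame: np.ndarray) -> dict:
        diff = frame.astype(np.int16) - self.last
        changed = np.abs(diff) > self.threshold
        self.last = frame.astype(np.int16)
        ys, xs = np.nonzero(changed)
        # Only the changed-pixel coordinates leave the sensor, not the raw frame.
        return {"n_changed": int(changed.sum()),
                "events": list(zip(xs.tolist(), ys.tolist()))}


if __name__ == "__main__":
    node = EdgeVisionNode()
    dark = np.zeros((64, 64), dtype=np.uint8)
    node.process(dark)                   # first frame primes the sensor
    lit = dark.copy()
    lit[10:14, 20:24] = 200              # a small bright patch appears
    summary = node.process(lit)
    raw_bytes = lit.nbytes                        # cost of streaming the full frame
    event_bytes = 4 * summary["n_changed"]        # e.g. two 2-byte coordinates per event
    print(f"changed pixels: {summary['n_changed']}, "
          f"raw frame: {raw_bytes} B, events only: {event_bytes} B")
```

When only a handful of pixels change between frames, the transmitted payload shrinks by orders of magnitude, which is the kind of saving that makes in-sensor processing attractive for battery-powered platforms.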
The team’s work demonstrates how these artificial compound eyes continuously monitor a scene by identifying and encoding which pixels have changed between frames. This mirrors the way insects process visual information, using motion parallax to distinguish near objects from distant ones and to extract motion and spatial information from the changing image.
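The sketch below illustrates the motion-parallax cue in the same spirit: for a sideways-moving observer, features on nearby objects shift farther across the image between frames than features on distant ones, so apparent shift can rank (and, with known motion, roughly estimate) depth. The focal length, camera translation, and feature shifts are made-up illustrative values, not measurements from the UVA system.

```python
FOCAL_PX = 700.0        # assumed focal length in pixels
TRANSLATION_M = 0.5     # assumed sideways observer motion between frames

shifts_px = {            # apparent feature shift between two frames (illustrative)
    "fence post": 18.0,
    "parked car": 6.5,
    "building":   1.2,
}

# Same geometry as binocular disparity: shift ≈ f * t / Z, so Z ≈ f * t / shift.
for name, shift in sorted(shifts_px.items(), key=lambda kv: kv[1], reverse=True):
    depth_m = FOCAL_PX * TRANSLATION_M / shift
    print(f"{name:10s} shift {shift:4.1f} px -> roughly {depth_m:5.1f} m away")
```

Because the parallax calculation only needs the pixels that actually moved, it pairs naturally with the change-based encoding described above.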
By combining advanced materials, innovative algorithms, and a deep understanding of biological vision systems, the researchers have created a computer vision system that could revolutionize AI applications. This biomimetic approach not only enhances the accuracy and efficiency of visual processing but also opens new possibilities for the future of AI-driven technologies.
As self-driving cars, UAVs, and other AI systems continue to evolve, the integration of such biomimetic vision systems could mark a major leap forward, making these technologies safer and more reliable in real-world environments.