A close up of a fly's compound eye. Hundreds of tiny hexagonal lenses can be seen. The eye is bathed in rainbow light, making it appear beautiful.

Fly Eyes, AI Brains: The Camera That Sees and Thinks in 360°


A New Way of Seeing

What happens when scientists design a camera to see like a fly — and then teach it to think using Artificial Intelligence?
The result is a device that sees in nearly every direction at once and doesn’t just see, it understands.

Imagine a camera with the eyes of a fly, seeing nearly 360 degrees at once, tracking multiple targets, and recognising patterns before you blink. Now give it a brain as powerful as a supercomputer. That’s exactly what scientists in Shanghai have done. They’ve built a camera inspired by insect eyes and boosted it with an intelligent AI “supercomputer” brain.

A close up of a fly; its large compound eyes are prominent. In the background are glowing data streams, highlighting the combination of nature and technology.

What’s So Special About a Fly’s Vision?

First, why would we want to copy a fly’s vision? 

Arthropods (a group of animals including insects, spiders and crabs) have very special eyes. Human eyes feature one large lens, which gives us extremely detailed vision but a relatively narrow field of view. Arthropods, on the other hand, have compound eyes. They’re made up of hundreds or thousands of tiny units called ommatidia. Each ommatidium captures a small part of the scene from a slightly different angle; together, they form a mosaic image.

This gives insects:

  • Near-360° panoramic vision
  • Ultra-fast motion detection
  • The ability to track many targets at once

But there’s a trade-off: fine detail is poor. The image is pixelated and fuzzy compared to human sight.

Scientists saw an opportunity to combine the fly’s fast, all-seeing perspective with the sharp, intelligent image processing that modern AI can offer. The result? A breakthrough system that borrows nature’s tricks and then improves on them.

By the way, if you're interested in biomimicry, read next about how other scientists were also inspired by nature to create robots that can swarm like insects to travel within our lungs and deliver medication!

A figure from the Long et al academic paper, split into three lettered sections. Section A shows a scenario of an arthropod surrounded by predators and prey, demonstrating how it can simultaneously detect and identify multiple threats at once. Section B is an illustration of a natural visual system consisting of a compound eye and a brain: multiple small lenses connect to long photoreceptor cells, many of these units line up side by side, and together they connect via the optic nerve to the brain. Section C is an illustration of an artificial visual system, consisting of the camera with multiple lenses attached to the digital signal processor via cables.

Long, Y., Dai, B., Chang, C., Upreti, N., Wei, L., Zheng, L., … Zhang, D. (2025). Seeing through arthropod eyes: An AI-assisted, biomimetic approach for high-resolution, multi-task imaging. Science Advances, 11(21), eadt3505. https://doi.org/10.1126/sciadv.adt3505


The Breakthrough: A Camera That Sees Like an Insect and Thinks Like a Supercomputer

Researchers have previously tried creating bio-mimicking cameras based on fly vision, but it has been hard to combine the wide view with the high resolution needed to make the camera truly useful… until now.

The scientists developed a dome-shaped camera with 127 tiny lenses, capable of capturing panoramic, full-colour images. They then paired this with a three-part AI pipeline, which not only cleared up the blurry input but could identify what it was looking at.
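To get a feel for the dome geometry, here is a purely illustrative Python sketch that spaces 127 points roughly evenly over a hemisphere using a Fibonacci spiral. The paper does not say the lenses are laid out this way; it's just a common, simple method for distributing points on a curved surface.

```python
import math

# Illustrative only: a Fibonacci spiral is one common way to space
# N points roughly evenly over a hemisphere. The real 127-lens layout
# in the paper may be arranged differently.
N = 127
golden_angle = math.pi * (3 - math.sqrt(5))

lenses = []
for i in range(N):
    z = (i + 0.5) / N             # height above the equator, in (0, 1)
    r = math.sqrt(1 - z * z)      # circle radius at that height
    theta = i * golden_angle      # spiral around the dome
    lenses.append((r * math.cos(theta), r * math.sin(theta), z))

# Every lens centre sits on the unit hemisphere (x^2 + y^2 + z^2 = 1),
# so each lens points outward in a slightly different direction,
# which is what gives the camera its near-panoramic coverage.
```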

An image from the academic paper. On the left is a photo of the camera the scientists designed; the many small lenses are arranged in a half-sphere shape. On the right is a black-and-white electron microscopy image of the camera, in which the lenses appear as raised bumps on its curved surface.

Long, Y., et al. (2025). Science Advances, 11(21), eadt3505. https://doi.org/10.1126/sciadv.adt3505


How the Hybrid Camera ‘Sees’ the World

While the lens structure is a masterpiece of biomimicry (using nature to inspire design), the AI processing takes things a step further. It borrows ideas from insect brains, like quickly spotting movement, tracking multiple targets, and recognising shapes. But the way the computer works is very different from real neurons. While it starts with nature’s blueprint, this camera ends up doing things no insect ever could.

How the Camera “Thinks”: The Three-Step AI Vision Pipeline

Here’s how that ‘super-brain’ processes what the camera sees — step by step.

The AI-driven pipeline works in three key stages, each building on the last:

1. Spotting What’s There — and How Far Away

The first step is all about detecting objects in the scene.
Using the data from its many tiny lenses (like an insect’s compound eye), the camera rapidly identifies where objects are, what direction they’re moving in, and even estimates how far away they might be, all in real time.
This mimics how insects instantly react to motion or threats from any angle, giving the system fast awareness of its surroundings.

2. Cleaning Up the Image

Because the raw visual data from a compound-style lens is often distorted and low-resolution, the camera needs to reconstruct a clearer image before it can do anything useful with it.
In this stage, the AI uses advanced algorithms to sharpen the view, fill in gaps, and correct distortions caused by the lens design.
This gives the system a more accurate picture of what’s really in front of it, something insects themselves can’t do to this degree.

3. Analysing What It’s Looking At

Once the image is cleaned up, the final step is recognition.
Here, the AI identifies what kind of object it’s seeing: is it a number? A colour? A specific shape?
This is where deep learning comes in: the system has been trained to classify patterns the way our brains do, allowing it to interpret visual information instead of just capturing it.
It’s like giving the camera a brain that not only sees but understands.
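The three steps above can be sketched in miniature. This is a deliberately toy Python illustration of the detect → clean up → recognise flow, not the authors' actual pipeline; every function name, threshold, and data shape here is made up for illustration.

```python
import numpy as np

# Toy sketch of the three-stage idea (hypothetical names, NOT the
# paper's code). Each "lens" sees the same scene as a small noisy tile.

def detect(tiles):
    """Stage 1: flag which lens tiles contain something bright --
    a crude stand-in for locating objects across the panoramic field."""
    return [i for i, tile in enumerate(tiles) if tile.max() > 0.8]

def reconstruct(tiles):
    """Stage 2: average the overlapping tiles to suppress noise --
    a crude stand-in for the AI's sharpening and distortion correction."""
    return np.mean(np.stack(tiles), axis=0)

def classify(image):
    """Stage 3: a trivial rule standing in for the trained
    deep-learning classifier (bright blob present -> 'target')."""
    return "target" if image.max() > 0.8 else "background"

# Simulate four lenses viewing the same bright spot, each with noise.
rng = np.random.default_rng(0)
scene = np.zeros((8, 8))
scene[3:5, 3:5] = 1.0  # a bright 2x2 "object"
tiles = [np.clip(scene + rng.normal(0, 0.05, scene.shape), 0, 1)
         for _ in range(4)]

active = detect(tiles)       # which lenses saw the object
clean = reconstruct(tiles)   # merged, denoised view
label = classify(clean)      # what the system decides it is seeing
```

The real system replaces each of these one-line rules with a trained neural network, but the shape of the process (many noisy views in, one clean interpretation out) is the same.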

A figure from the Long et al academic paper. It shows the workflow of the camera. An image of a number goes through multiple stages of rotation, manipulation and analysis until the AI system is able to detect what number it is looking at and what colour it is.

Long, Y., et al. (2025). Science Advances, 11(21), eadt3505. https://doi.org/10.1126/sciadv.adt3505


To see more AI revolutions, and how AI is transforming healthcare, check out this blog next.

So, What Makes This Special?

Each step on its own echoes part of how animal vision works. But put together, and boosted by AI, the system becomes something much more powerful.
It detects, refines, and interprets the world in one smooth process. That’s what makes this camera not just inspired by biology, but a true bio-AI hybrid.

What’s Next? 

Although this camera is undoubtedly impressive, it is important to remember that its development is still at an early stage. And like most early-stage research, it does come with a few caveats. While insects process visual info almost instantly, this camera can’t match that speed just yet.
There’s a delay between capturing an image and fully processing it, especially when working with complex scenes or multiple moving objects.
Also, the whole system is still relatively large and lab-based. It relies on a bulky Graphics Processing Unit (GPU). It’s not miniaturised yet, though that’s the long-term goal.

Nevertheless, this is an exciting proof of concept which could have incredible real-world uses. According to the research team, potential applications include aerial drones for wide-area monitoring, next-generation security systems, and advanced robotics. Once it has been miniaturised, it could also provide the next generation of medical cameras, offering doctors an incredible field of vision with the level of precision needed when lives are at stake. By combining panoramic vision with intelligent processing, the technology could give machines a situational awareness far beyond current systems.

The researchers now hope to develop the system further: speeding up the processing time, improving the lenses, and miniaturising the entire system.

A close up of a fly sitting on a camera lens. Its very large eyes are a noticeable feature.

Final thoughts

While the work is still in its early stages, it hints at a future where biology and AI combine to solve problems we haven’t yet imagined.

Nature has long inspired scientists and inventors alike. The natural world is bursting with creativity and ingenious solutions to problems, shaped by millions of years of evolution. Now, by combining these biological blueprints with cutting-edge machine learning, tomorrow’s tech looks more promising than ever. Technologies that could change how we see, move, think… and sense the world around us.

Here are some ideas to spark a fascinating discussion.

  • If this camera can eventually see and process images better than humans, should it replace human vision in some jobs — or always work alongside us?
  • Should we develop cameras for medical use first, or for uses like security and drones? Which matters more?
  • When technology is inspired by nature, do you think it’s improving on nature or just imitating it?
  • AI is helping cameras “understand” what they see — what could that mean for safety, medicine, or nature conservation?

Big Family Question:

Would you trust a machine’s interpretation of what it “sees” more than a human’s? Why or why not?

Looking for more family-friendly discussion prompts? Explore our child-focused version of this blog [here].

Curious but cautious?

Love diving into science but not always sure what to believe? Grab our free guide:
“5 Ways to Spot Fake Science News”
It’s full of quick, practical tips to help you tell real breakthroughs from misleading headlines.

From fly-inspired cameras to AI super-brains — nature and technology are reshaping the future.
Don’t miss the next breakthrough. Subscribe to our newsletter and keep exploring the frontiers of science with us.

Keep Exploring

Want to see more incredible innovations? Check out:

Let’s Talk About It

And now we’d love to hear your thoughts. What do you think? Should we design more technology by borrowing from nature? What animal sense would you like to see integrated into technology? Share your thoughts in the comments.
