Researchers at Stanford Crack the Code of Natural Vision as New Model Reveals How the Eye Decodes Visual Scenes

A fundamental goal of sensory neuroscience is to understand the neural code that processes natural visual scenes. A central, still unresolved question is how circuits composed of many interacting cell types give rise to that code in natural settings. The retina has evolved to communicate information about natural scenes through a diverse population of interneurons, which are crucial for transmitting visual information to the brain.

Much of what is known about retinal function comes from studies of responses to artificial stimuli such as flashing lights and white noise, which may not capture how the retina processes real visual input. Although such methods have revealed a variety of retinal computations, how the retina's more than 50 types of interneurons contribute to them remains poorly understood. In a recent research paper, a team of researchers made a significant advance by showing that a three-layer network model can predict retinal responses to natural scenes with striking precision, approaching the limits set by experimental variability. To understand how the brain processes natural visual scenes, the researchers focused on the retina, the part of the eye that sends visual signals to the brain.
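To make the idea of a three-layer network concrete, here is a minimal sketch of such a model: two interneuron-like convolutional layers feeding a ganglion-cell readout. The layer sizes, kernel shapes, and random weights are assumptions for illustration only, not the paper's architecture; in the actual work the parameters are fit to recorded retinal responses.

```python
# Minimal sketch of a three-layer retina-like model (illustrative
# assumptions throughout; weights are random rather than fit to data).
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    windows = np.lib.stride_tricks.sliding_window_view(x, (kh, kw))
    return np.einsum('ijkl,kl->ij', windows, kernel)

def relu(x):
    return np.maximum(x, 0.0)

def retina_model(frame, k1, k2, w_out):
    h1 = relu(conv2d(frame, k1))                 # layer 1: bipolar-like units
    h2 = relu(conv2d(h1, k2))                    # layer 2: amacrine-like units
    drive = w_out @ h2.ravel()                   # layer 3: ganglion-cell readout
    return np.log1p(np.exp(drive))               # softplus -> nonneg. firing rates

frame = rng.standard_normal((50, 50))            # one stimulus frame
k1 = rng.standard_normal((9, 9)) * 0.1           # 50 -> 42 spatial size
k2 = rng.standard_normal((7, 7)) * 0.1           # 42 -> 36 spatial size
w_out = rng.standard_normal((4, 36 * 36)) * 0.01 # 4 model ganglion cells
rates = retina_model(frame, k1, k2, w_out)       # shape (4,), all positive
```

The stacked nonlinear layers are what let the model capture computations that a single linear-nonlinear stage cannot.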

A key property of the model is its interpretability: its internal organization can be examined and understood. The responses of interneurons built directly into the model correlate strongly with those recorded separately, suggesting that the model captures significant aspects of real retinal interneuron activity. When trained only on natural scenes, it reproduces a wide range of motion-analysis, adaptation, and predictive-coding phenomena. Models trained on white noise, by contrast, fail to reproduce the same phenomena, supporting the idea that studying natural scenes is necessary to understand natural visual processing.
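The interpretability check described above amounts to correlating a model interneuron's activity trace with a separately recorded one. A minimal sketch, with synthetic data standing in for real recordings (the mixing coefficients are arbitrary assumptions):

```python
# Sketch of comparing a model interneuron to a recorded one via
# Pearson correlation; synthetic traces stand in for real data.
import numpy as np

rng = np.random.default_rng(2)
recorded = rng.standard_normal(300)                      # recorded interneuron trace
model = 0.8 * recorded + 0.2 * rng.standard_normal(300)  # model unit (assumed mix)

r = np.corrcoef(model, recorded)[0, 1]                   # Pearson correlation
```

A correlation near 1 would indicate that the model unit tracks the recorded interneuron's response closely.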

The team also presents a methodology for decomposing the computations performed by the model's ganglion cells into the individual contributions of its interneurons. This approach can automatically generate novel hypotheses about how interneurons with different spatiotemporal response patterns interact to produce retinal computations, helping to explain predictive phenomena.
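The core idea behind such a decomposition can be sketched in a toy setting: for a linear readout, the drive to a model ganglion cell splits exactly into one additive term per interneuron. The sizes and weights below are illustrative assumptions, not the paper's method in full.

```python
# Toy sketch of splitting a ganglion cell's drive into per-interneuron
# contributions (illustrative assumptions; linear readout only).
import numpy as np

rng = np.random.default_rng(1)
h = np.maximum(rng.standard_normal(8), 0)  # 8 model interneuron activations
w = rng.standard_normal(8)                 # readout weights to one ganglion cell

contributions = w * h                      # each interneuron's share of the drive
drive = contributions.sum()                # total input to the output nonlinearity
rate = np.log1p(np.exp(drive))             # softplus firing rate
```

Ranking `contributions` by magnitude then suggests which interneuron types drive a given computation, which is the kind of hypothesis generation the paper describes.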

To build the natural image sequences, images were jittered at 30 frames per second, changed once per second, and displaced along a random-walk trajectory modeled on fixational eye movement data. This produced a spatiotemporal stimulus much closer to the environment in which the retina actually operates.
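The stimulus construction above can be sketched in a few lines: crop a window from a larger image, move the crop position by a one-pixel random-walk step each frame, and swap in a new image once per second. The image size, crop size, and step size are assumptions for illustration, and random arrays stand in for real natural images.

```python
# Sketch of the jittered natural-scene stimulus: 30 fps, new image
# every second, random-walk crop position mimicking fixational eye
# movements. Parameters and images are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fps, seconds, crop = 30, 3, 50
images = rng.standard_normal((seconds, 200, 200))  # one "image" per second

frames = []
y, x = 75, 75                          # initial crop position
for t in range(fps * seconds):
    if t % fps == 0:                   # switch images once per second
        img = images[t // fps]
    # one-pixel random-walk jitter, clipped to keep the crop in bounds
    y = int(np.clip(y + rng.integers(-1, 2), 0, 200 - crop))
    x = int(np.clip(x + rng.integers(-1, 2), 0, 200 - crop))
    frames.append(img[y:y + crop, x:x + crop])

stimulus = np.stack(frames)            # shape: (90, 50, 50)
```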

In conclusion, the team found that three layers of neural processing, mirroring the retina's layered structure, were essential for reproducing accurate responses. The model successfully predicted how real retinal ganglion cells respond to both natural images and random noise. This work thus deepens our understanding of how the visual system interprets the world, offering insight into the intricate processes that govern natural vision.

Check out the Paper. All Credit For This Research Goes To the Researchers on This Project.

1/Our paper @NeuroCellPress “Interpreting the retinal code for natural scenes” develops explainable AI (#XAI) to derive a SOTA deep network model of the retina and *understand* how this net captures natural scenes plus 8 seminal experiments over >2 decades

— Surya Ganguli (@SuryaGanguli) August 14, 2023

The post Researchers at Stanford Crack The Code of Natural Vision As New Model Reveals How Eyes Decode Visual Scene appeared first on MarkTechPost.
