What Can You See? Modeling the Ability of V1 Neurons to Perform Low-Level Image Processing


While not physiologically accurate, deep neural networks have a long history of being inspired by the brain. Of particular interest to computer vision researchers is the behaviour of neurons in the primary visual cortex (V1) when responding to visual stimuli. Understanding how V1 neurons encode visual stimuli might offer insight into how to improve the design of computer vision algorithms and "neural" representations of visual data. Neurons in V1 are known to exhibit responses that can be modeled by 2D Gabor filters. Given this, we ask what kinds of functions a population of spiking neurons with Gabor-like encoders can perform on images. In this work we use spiking neuron modeling methods, as described in the Neural Engineering Framework, to explore which low-level image operations can be accurately encoded by a population of sparse Gabor encoders. Identifying the low-level image operations that our simulated neuron population performs well could provide insight into which feature extractions can plausibly be performed by V1. We find that, compared to the other operations tested, such as Sobel filtering and high-pass filtering, our modeled V1 neuron population is better at low-pass filtering operations such as average filtering, as measured by the RMSE of decoding. The reasons for this are unclear for now and require further investigation.
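The pipeline the abstract describes can be illustrated with a minimal, non-spiking sketch: project images onto a random bank of 2D Gabor encoders, treat rectified projections as a stand-in for neuron firing rates, solve for linear decoders by least squares (as in the NEF's decoder-solving step), and score the decode of a low-pass target (3x3 average filtering) by RMSE. All parameter values below (image size, number of encoders, Gabor parameter ranges) are illustrative assumptions, not the paper's actual setup, and rectified linear units replace the spiking neuron model used in the work.

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE = 16                      # images are SIZE x SIZE (illustrative choice)
N_GABORS = 300                 # number of Gabor encoders (each used with both signs)

def gabor_2d(size, cx, cy, sigma, theta, lam, psi=0.0, gamma=0.5):
    """2D Gabor filter: a Gaussian envelope times an oriented sinusoidal carrier."""
    y, x = np.mgrid[0:size, 0:size].astype(float)
    x, y = x - cx, y - cy
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

def avg3(img):
    """3x3 average (box) filter with edge padding -- the low-pass target operation."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + SIZE, j:j + SIZE] for i in range(3) for j in range(3)) / 9.0

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

# Random bank of Gabor encoders, flattened into row vectors.
E = np.stack([
    gabor_2d(SIZE,
             cx=rng.uniform(0, SIZE), cy=rng.uniform(0, SIZE),
             sigma=rng.uniform(1.5, 4.0), theta=rng.uniform(0, np.pi),
             lam=rng.uniform(3.0, 8.0)).ravel()
    for _ in range(N_GABORS)
])
E = np.vstack([E, -E])  # include both signs so rectified units can span linear responses

# Random images as a stand-in for a real image set.
X_train = rng.uniform(-1, 1, size=(1000, SIZE * SIZE))
X_test = rng.uniform(-1, 1, size=(200, SIZE * SIZE))
Y_train = np.stack([avg3(x.reshape(SIZE, SIZE)).ravel() for x in X_train])
Y_test = np.stack([avg3(x.reshape(SIZE, SIZE)).ravel() for x in X_test])

# Rectified projections approximate firing rates; decoders come from least squares.
A_train = np.maximum(0.0, X_train @ E.T)
A_test = np.maximum(0.0, X_test @ E.T)
D, *_ = np.linalg.lstsq(A_train, Y_train, rcond=None)

err = rmse(A_test @ D, Y_test)
baseline = rmse(np.zeros_like(Y_test), Y_test)
print(f"decode RMSE = {err:.4f}  (zero-prediction baseline = {baseline:.4f})")
```

Swapping `avg3` for a Sobel or high-pass kernel and comparing the resulting decode RMSEs mirrors the comparison described in the abstract, with the caveat that a spiking simulation (e.g. in Nengo) would use tuning curves and noise terms this sketch omits.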