Multi-Neighborhood Convolutional Networks

Elnaz Barshan, Paul Fieguth, Alexander Wong


We explore the role of scale for improved feature learning in convolutional
networks. We propose multi-neighborhood convolutional
networks, designed to learn image features at different levels of
detail. Utilizing nonlinear scale-space models, the proposed multi-neighborhood
model can effectively capture fine-scale image characteristics
(i.e., appearance) using a small neighborhood, while
coarse-scale image structures (i.e., shape) are detected through
a larger neighborhood. The experimental results demonstrate the
superior performance of the proposed multi-scale multi-neighborhood
models over their single-scale counterparts.
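The abstract does not specify implementation details, but the core idea it describes can be sketched as two parallel convolutional branches with different receptive-field sizes whose responses are stacked into one feature map. The kernel sizes (3 for the fine branch, 7 for the coarse branch) and the random filters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def conv2d_same(image, kernel):
    # Naive 2D convolution with zero padding so output matches input size.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_neighborhood_features(image, fine_size=3, coarse_size=7, rng=None):
    # Two parallel branches: a small neighborhood for fine-scale appearance,
    # a larger neighborhood for coarse-scale shape; responses are stacked.
    # Filters are random placeholders; in a trained network they are learned.
    rng = rng or np.random.default_rng(0)
    fine_kernel = rng.standard_normal((fine_size, fine_size))
    coarse_kernel = rng.standard_normal((coarse_size, coarse_size))
    fine = np.maximum(conv2d_same(image, fine_kernel), 0)      # ReLU
    coarse = np.maximum(conv2d_same(image, coarse_kernel), 0)  # ReLU
    return np.stack([fine, coarse])  # shape: (2, H, W)

image = np.random.default_rng(1).standard_normal((16, 16))
feats = multi_neighborhood_features(image)
```

In this sketch both branches see the same input and their outputs are simply stacked; a full network would learn the filters end-to-end and feed the combined map into subsequent layers.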
