
Automated search for optimal convolutional neural network factorization


Deep neural networks (DNNs) are the state-of-the-art technique for artificial intelligence tasks in computer vision. DNNs are widely deployed across many industries, including entertainment. Most notable is the use of convolutional neural network (CNN) architectures for object detection and classification, and more recently for information retrieval. However, one drawback of CNNs is their computational cost, even on trivial tasks. The reason for this high cost lies in the large number of floating-point operations (FLOPs) required by kernel convolution, the core operation on which a CNN is built. This motivates compressing the floating-point parameters of CNNs. In this work we explore techniques for CNN compression using tensor decompositions. Furthermore, we aim to build an automated tool that executes a grid search through the space of all possible factorizations of a CNN and selects an optimal compressed network representation with respect to performance requirements.
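To give a sense of the savings at stake, the sketch below compares the FLOP count of a standard convolution against one common CP-style factorization, in which a k x k convolution is replaced by a sequence of four cheaper convolutions. The layer sizes and the rank value are illustrative assumptions, not figures from this work:

```python
def conv_flops(h, w, c_in, c_out, k):
    # Multiply-accumulates for a standard k x k convolution
    # producing an h x w output map.
    return h * w * c_in * c_out * k * k

def cp_conv_flops(h, w, c_in, c_out, k, rank):
    # One common CP-style replacement: a 1x1 conv (c_in -> R),
    # a k x 1 and a 1 x k per-channel conv on the R components,
    # then a 1x1 conv (R -> c_out).
    return h * w * (c_in * rank + k * rank + k * rank + rank * c_out)

# Hypothetical mid-network layer: 56x56 feature map, 256 channels, 3x3 kernel.
full = conv_flops(56, 56, 256, 256, 3)
cp = cp_conv_flops(56, 56, 256, 256, 3, rank=32)
print(f"full: {full:,} FLOPs, CP rank-32: {cp:,} FLOPs, "
      f"reduction ~{full / cp:.1f}x")
```

The grid search described above would sweep the rank (and the choice of decomposition) per layer, trading FLOP reduction of this kind against accuracy loss.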