A Sub-Model Detachable Convolutional Neural Network

Authors
Ninnart Fuengfusin*, Hakaru Tamukoh
Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka 808-0196, Japan
*Corresponding author. Email: [email protected]
Corresponding Author
Ninnart Fuengfusin
Received 25 November 2020, Accepted 20 April 2021, Available Online 31 May 2021.
DOI
https://doi.org/10.2991/jrnal.k.210521.012
Keywords
Convolutional neural networks; supervised learning; model compression
Abstract
In this research, we propose a Convolutional Network with sub-Networks (CNSN), i.e., a Convolutional Neural Network (CNN) or base-model that can be divided into sub-models on demand. The CNN architecture entails that feature-map shapes vary throughout the model; therefore, a hidden layer within a CNN cannot directly process an input image without modification. To address this problem, we propose a step-down convolutional layer, which is a convolutional layer acting as the input layer of a sub-model. This step-down convolutional layer reshapes and processes an input image into a representation suited to the sub-model. To train the CNSN, we treat the base-model and the sub-models as distinct models; each model is forward- and back-propagated separately. Using a multi-model loss, i.e., a linear combination of the losses from the base-model and the sub-models, we update model parameters that can be utilized in both the base-model and the sub-models.
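The sketch below illustrates the training idea described in the abstract: a shared set of layers reachable either through the base-model's early block or through a step-down convolution applied directly to the input image, trained with a linear combination of both losses. This is a minimal PyTorch sketch for illustration only, not the authors' reference implementation; the layer sizes, the stride of the step-down convolution, and the equal loss weighting are assumptions.

```python
# Minimal sketch of CNSN-style training (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNSN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Early block used only by the base-model (assumed 32x32 RGB input).
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        # Step-down convolution: maps the raw image directly to the
        # feature-map shape expected by the shared layers below.
        self.step_down = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        # Layers shared by the base-model and the sub-model.
        self.shared = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        # Base-model and sub-model paths are propagated separately.
        out_base = self.shared(self.block1(x))
        out_sub = self.shared(self.step_down(x))
        return out_base, out_sub

model = CNSN()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))

out_base, out_sub = model(x)
# Multi-model loss: linear combination of base-model and sub-model losses
# (the weight of 1.0 is an assumed value).
loss = F.cross_entropy(out_base, y) + 1.0 * F.cross_entropy(out_sub, y)
opt.zero_grad()
loss.backward()
opt.step()
```

After training, the sub-model (the step-down convolution followed by the shared layers) can be detached and run on its own, which is the "detachable" property referred to in the title.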
Copyright
© 2021 The Authors. Published by ALife Robotics Corp. Ltd.
Open Access
This is an open access article distributed under the CC BY-NC 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).
