(Submitted on 13 Dec 2017 (v1), last revised 30 Jan 2018 (this version, v2))

Abstract: The field of deep learning has seen significant advancement in recent years. However, much of the existing work has focused on real-valued models. Recent work has shown that a deep learning system using complex numbers can be deeper for a fixed parameter budget than its real-valued counterpart. In this work, we explore the benefits of generalizing one step further, to the hypercomplex numbers, quaternions specifically, and provide the architecture components needed to build deep quaternion networks. We present quaternion convolutions, a quaternion weight-initialization scheme, and algorithms for quaternion batch normalization. These pieces are tested in a classification model trained end-to-end on the CIFAR-10 and CIFAR-100 data sets and in a segmentation model trained end-to-end on the KITTI Road Segmentation data set. The quaternion networks show improved convergence compared to real-valued and complex-valued networks, especially on the segmentation task.
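The core idea behind the quaternion convolutions mentioned in the abstract is to treat each feature map as four real-valued components (r, i, j, k) and combine real convolutions according to the Hamilton product. The sketch below is an illustrative 1-D reconstruction of that idea, not the paper's actual implementation; the helper `conv1d` and the function names are assumptions for the example.

```python
import numpy as np

def conv1d(x, w):
    # Real-valued 'valid' cross-correlation of a 1-D signal x with kernel w.
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[t:t + len(w)], w) for t in range(n)])

def quaternion_conv1d(x, w):
    """Illustrative quaternion 1-D convolution (not the paper's code).

    x and w are each 4-tuples (r, i, j, k) of real arrays holding the
    quaternion components. The Hamilton product x * w (order matters,
    since quaternion multiplication is noncommutative) expands into
    sixteen real convolutions combined with the signs below.
    """
    xr, xi, xj, xk = x
    wr, wi, wj, wk = w
    r = conv1d(xr, wr) - conv1d(xi, wi) - conv1d(xj, wj) - conv1d(xk, wk)
    i = conv1d(xr, wi) + conv1d(xi, wr) + conv1d(xj, wk) - conv1d(xk, wj)
    j = conv1d(xr, wj) - conv1d(xi, wk) + conv1d(xj, wr) + conv1d(xk, wi)
    k = conv1d(xr, wk) + conv1d(xi, wj) - conv1d(xj, wi) + conv1d(xk, wr)
    return r, i, j, k
```

With length-1 signals this reduces to an ordinary Hamilton product, which gives a quick sanity check: (1 + 2i + 3j + 4k)(5 + 6i + 7j + 8k) = -60 + 12i + 30j + 24k.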

Submission history

From: Chase Gaudet
[v1] Wed, 13 Dec 2017 04:19:24 GMT (117kb,D)
[v2] Tue, 30 Jan 2018 16:08:56 GMT (117kb,D)