
Algorithm-architecture adequacy of spiking neural networks for massively parallel processing hardware

Abstract: The last decade has seen the re-emergence of machine learning methods based on formal neural networks under the name of deep learning. Although these methods have enabled major breakthroughs in machine learning, several obstacles to industrializing them persist, notably the need to collect and label very large amounts of data, as well as the computing power required for learning and inference with this type of neural network. In this thesis, we study the adequacy between inference and learning algorithms derived from biological neural networks and massively parallel hardware architectures. We show through three contributions that such adequacy drastically accelerates the computation times inherent to neural networks. In our first axis, we study the adequacy of the BCVision software engine developed by Brainchip SAS for GPU platforms. We also propose the introduction of a coarse-to-fine architecture based on complex cells. We show that porting to GPU accelerates processing by a factor of seven, while the coarse-to-fine architecture reaches a factor of one thousand. The second contribution presents three spike-propagation algorithms adapted to parallel architectures. We study the computational models of these algorithms exhaustively, allowing the selection or design of a hardware system suited to the parameters of the desired network. In our third axis, we present a method to apply the Spike-Timing-Dependent Plasticity (STDP) rule to image data in order to learn visual representations in an unsupervised manner. We show that our approach effectively learns a hierarchy of representations relevant to image classification, while requiring ten times less data than other approaches in the literature.
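For readers unfamiliar with the STDP rule mentioned in the third contribution, a minimal sketch of the standard pair-based additive form is given below. This is a generic textbook illustration, not the thesis's actual method: the parameter values (`a_plus`, `a_minus`, `tau_plus`, `tau_minus`) and the clipping bounds are placeholder assumptions.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.025,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based additive STDP: potentiate the synapse when the
    presynaptic spike precedes the postsynaptic spike (causal pairing),
    depress it otherwise. Times are in milliseconds."""
    dt = t_post - t_pre
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # pre before post -> LTP
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # post before pre -> LTD
    # Keep the weight inside its allowed range.
    return float(np.clip(w + dw, w_min, w_max))

# A causal spike pair strengthens the synapse...
w_ltp = stdp_update(0.5, t_pre=10.0, t_post=15.0)
# ...while an anti-causal pair weakens it.
w_ltd = stdp_update(0.5, t_pre=15.0, t_post=10.0)
```

In the unsupervised setting the thesis describes, a rule of this family lets neurons that fire shortly after their inputs reinforce exactly those inputs, so that frequently co-occurring visual patterns become learned features without any labels.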

Cited literature: 260 references
Contributor: ABES STAR
Submitted on: Monday, December 9, 2019 - 3:59:08 PM
Last modification on: Wednesday, March 4, 2020 - 3:53:57 AM
Long-term archiving on: Tuesday, March 10, 2020 - 11:09:14 PM


Version validated by the jury (STAR)


  • HAL Id: tel-02400657, version 1



Paul Ferré. Algorithm-architecture adequacy of spiking neural networks for massively parallel processing hardware. Machine Learning [cs.LG]. Université Paul Sabatier - Toulouse III, 2018. English. ⟨NNT : 2018TOU30318⟩. ⟨tel-02400657⟩


