ReSuMe — new supervised learning method for spiking neural networks
Filip Ponulak
Full text: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.6325&rep=rep1&type=pdf
Abstract
In this report I introduce ReSuMe, a new supervised learning method for Spiking Neural Networks. The research on ReSuMe has been motivated primarily by the need for an efficient learning method
for movement control for the physically disabled. However, a thorough analysis of the ReSuMe method reveals its suitability not only to the task of movement control, but also to other real-life applications, including
modeling, identification, and control of diverse non-stationary, nonlinear objects.
ReSuMe integrates the idea of learning windows, known from spike-based Hebbian rules, with a novel concept of remote supervision. A general overview of the method, the basic definitions, the network architecture, and the details of the learning algorithm are presented. The properties of ReSuMe, such as locality, computational simplicity, and suitability for online processing, are discussed. ReSuMe's learning abilities are illustrated in a verification experiment.
Ratings & reviews
The first general supervised learning rule for spiking neurons
Răzvan Valentin Florian
This report presents the first general supervised learning rule for spiking neurons. The rule was later developed and analyzed in more detail in (Ponulak, 2006a,b; Ponulak & Kasiński, 2006; Ponulak, 2008; Ponulak & Kasiński, 2010). ReSuMe is a learning method for spiking neurons that allows learning of arbitrary output spike trains, including arbitrary mappings between temporally encoded inputs and outputs.
However, this learning rule has been conjectured without an analytical justification, by analogy to the Widrow-Hoff rule for analog neurons. To date, it has been shown analytically that ReSuMe will converge to an optimal solution only for the case of one input spike and one target output spike (Ponulak, 2006a). Simulations have shown that not all the terms of the conjectured learning rule are needed for learning (Ponulak, 2008).
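To make the analogy to the Widrow-Hoff rule concrete, here is a minimal discrete-time sketch of a ReSuMe-style weight update: the error between the desired and actual output spike trains gates a non-associative term plus a learning-window term computed from an exponentially filtered input trace. All names, constants, and the explicit Euler discretization are illustrative assumptions; the exact continuous-time rule and learning windows are given in Ponulak & Kasiński (2010).

```python
import numpy as np

def resume_update(w, s_in, s_des, s_out, a=0.01, A=0.1, tau=5.0, dt=1.0):
    """One sweep of a ReSuMe-like update (illustrative sketch, not the exact rule).

    w:     (n_in,) synaptic weights
    s_in:  (T, n_in) binary input spike trains
    s_des: (T,) desired output spike train
    s_out: (T,) actual output spike train
    a:     non-associative learning term
    A,tau: amplitude and time constant of an exponential learning window
    """
    T, n_in = s_in.shape
    trace = np.zeros(n_in)           # input spikes filtered by the learning window
    decay = np.exp(-dt / tau)
    for t in range(T):
        trace = trace * decay + s_in[t]
        err = s_des[t] - s_out[t]    # +1 at a missing spike, -1 at an extra spike
        if err != 0:
            w = w + err * (a + A * trace)
    return w
```

With this form, a missing desired spike potentiates recently active synapses (and an extra output spike depresses them), while weights are left untouched once the output matches the target, which is the sense in which the rule mirrors Widrow-Hoff error correction.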
I have later introduced other supervised learning rules for spiking neurons with temporal coding (chronotrons) in (Florian, 2012). I have shown there that ReSuMe is less efficient than E-learning, leading to a lower capacity.
References:
Florian, R. V. (2012). The Chronotron: A Neuron That Learns to Fire Temporally Precise Spike Patterns. (M. Zochowski, Ed.) PLoS ONE, 7(8), e40233.
Ponulak, F. (2006a). Supervised learning in spiking neural networks with ReSuMe method (PhD thesis). Poznań University of Technology, Faculty of Electrical Engineering, Institute of Control and Information Engineering, Poznań, Poland.
Ponulak, F. (2006b). ReSuMe — Proof of convergence. Technical Report, Institute of Control and Information Engineering, Poznań University of Technology, Poland.
Ponulak, F., & Kasiński, A. (2006). Generalization Properties of SNN Trained with ReSuMe. In Proceedings of the European Symposium on Artificial Neural Networks, ESANN’2006, Bruges, Belgium.
Ponulak, F. (2008). Analysis of ReSuMe learning process for spiking neural networks. International Journal of Applied Mathematics and Computer Science, 18(2), 117-127.
Ponulak, F., & Kasiński, A. (2010). Supervised Learning in Spiking Neural Networks with ReSuMe. Neural Computation, 22(2), 467-510.